00:00:00.001 Started by upstream project "autotest-per-patch" build number 132319
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "jbp-per-patch" build number 25764
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.070 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:03.769 The recommended git tool is: git
00:00:03.770 using credential 00000000-0000-0000-0000-000000000002
00:00:03.772 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:03.786 Fetching changes from the remote Git repository
00:00:03.792 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:03.807 Using shallow fetch with depth 1
00:00:03.807 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:03.807 > git --version # timeout=10
00:00:03.820 > git --version # 'git version 2.39.2'
00:00:03.820 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:03.832 Setting http proxy: proxy-dmz.intel.com:911
00:00:03.832 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/84/24384/13 # timeout=5
00:00:09.342 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:09.356 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:09.371 Checking out Revision 6d4840695fb479ead742a39eb3a563a20cd15407 (FETCH_HEAD)
00:00:09.371 > git config core.sparsecheckout # timeout=10
00:00:09.385 > git read-tree -mu HEAD # timeout=10
00:00:09.404 > git checkout -f 6d4840695fb479ead742a39eb3a563a20cd15407 # timeout=5
00:00:09.430 Commit message: "jenkins/jjb-config: Commonize distro-based params"
00:00:09.431 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:09.660 [Pipeline] Start of Pipeline
00:00:09.671 [Pipeline] library
00:00:09.672 Loading library shm_lib@master
00:00:09.673 Library shm_lib@master is cached. Copying from home.
00:00:09.686 [Pipeline] node
00:00:09.695 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:09.696 [Pipeline] {
00:00:09.712 [Pipeline] catchError
00:00:09.713 [Pipeline] {
00:00:09.724 [Pipeline] wrap
00:00:09.730 [Pipeline] {
00:00:09.738 [Pipeline] stage
00:00:09.740 [Pipeline] { (Prologue)
00:00:09.955 [Pipeline] sh
00:00:10.238 + logger -p user.info -t JENKINS-CI
00:00:10.256 [Pipeline] echo
00:00:10.257 Node: WFP6
00:00:10.263 [Pipeline] sh
00:00:10.554 [Pipeline] setCustomBuildProperty
00:00:10.562 [Pipeline] echo
00:00:10.563 Cleanup processes
00:00:10.566 [Pipeline] sh
00:00:10.844 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.844 3652250 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.853 [Pipeline] sh
00:00:11.129 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:11.129 ++ grep -v 'sudo pgrep'
00:00:11.129 ++ awk '{print $1}'
00:00:11.129 + sudo kill -9
00:00:11.129 + true
00:00:11.139 [Pipeline] cleanWs
00:00:11.146 [WS-CLEANUP] Deleting project workspace...
00:00:11.146 [WS-CLEANUP] Deferred wipeout is used...
00:00:11.151 [WS-CLEANUP] done
00:00:11.154 [Pipeline] setCustomBuildProperty
00:00:11.162 [Pipeline] sh
00:00:11.436 + sudo git config --global --replace-all safe.directory '*'
00:00:11.521 [Pipeline] httpRequest
00:00:11.895 [Pipeline] echo
00:00:11.897 Sorcerer 10.211.164.20 is alive
00:00:11.905 [Pipeline] retry
00:00:11.906 [Pipeline] {
00:00:11.918 [Pipeline] httpRequest
00:00:11.922 HttpMethod: GET
00:00:11.922 URL: http://10.211.164.20/packages/jbp_6d4840695fb479ead742a39eb3a563a20cd15407.tar.gz
00:00:11.922 Sending request to url: http://10.211.164.20/packages/jbp_6d4840695fb479ead742a39eb3a563a20cd15407.tar.gz
00:00:11.929 Response Code: HTTP/1.1 200 OK
00:00:11.930 Success: Status code 200 is in the accepted range: 200,404
00:00:11.930 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_6d4840695fb479ead742a39eb3a563a20cd15407.tar.gz
00:00:24.245 [Pipeline] }
00:00:24.262 [Pipeline] // retry
00:00:24.270 [Pipeline] sh
00:00:24.554 + tar --no-same-owner -xf jbp_6d4840695fb479ead742a39eb3a563a20cd15407.tar.gz
00:00:24.571 [Pipeline] httpRequest
00:00:24.953 [Pipeline] echo
00:00:24.955 Sorcerer 10.211.164.20 is alive
00:00:24.965 [Pipeline] retry
00:00:24.967 [Pipeline] {
00:00:24.981 [Pipeline] httpRequest
00:00:24.986 HttpMethod: GET
00:00:24.986 URL: http://10.211.164.20/packages/spdk_a0c128549ce17427c3a035fd0ecce392e10dce99.tar.gz
00:00:24.986 Sending request to url: http://10.211.164.20/packages/spdk_a0c128549ce17427c3a035fd0ecce392e10dce99.tar.gz
00:00:24.992 Response Code: HTTP/1.1 200 OK
00:00:24.992 Success: Status code 200 is in the accepted range: 200,404
00:00:24.993 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a0c128549ce17427c3a035fd0ecce392e10dce99.tar.gz
00:04:00.811 [Pipeline] }
00:04:00.832 [Pipeline] // retry
00:04:00.841 [Pipeline] sh
00:04:01.127 + tar --no-same-owner -xf spdk_a0c128549ce17427c3a035fd0ecce392e10dce99.tar.gz
00:04:03.688 [Pipeline] sh
00:04:03.975 + git -C spdk log --oneline -n5
00:04:03.975 a0c128549 bdev/nvme: Make bdev nvme get and set opts APIs public
00:04:03.975 53ca6a885 bdev/nvme: Rearrange fields in spdk_bdev_nvme_opts to reduce holes.
00:04:03.975 03b7aa9c7 bdev/nvme: Move the spdk_bdev_nvme_opts and spdk_bdev_timeout_action struct to the public header.
00:04:03.975 d47eb51c9 bdev: fix a race between reset start and complete
00:04:03.975 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:04:03.987 [Pipeline] }
00:04:04.001 [Pipeline] // stage
00:04:04.010 [Pipeline] stage
00:04:04.012 [Pipeline] { (Prepare)
00:04:04.030 [Pipeline] writeFile
00:04:04.045 [Pipeline] sh
00:04:04.329 + logger -p user.info -t JENKINS-CI
00:04:04.343 [Pipeline] sh
00:04:04.627 + logger -p user.info -t JENKINS-CI
00:04:04.639 [Pipeline] sh
00:04:04.975 + cat autorun-spdk.conf
00:04:04.975 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:04.975 SPDK_TEST_NVMF=1
00:04:04.975 SPDK_TEST_NVME_CLI=1
00:04:04.975 SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:04.975 SPDK_TEST_NVMF_NICS=e810
00:04:04.975 SPDK_TEST_VFIOUSER=1
00:04:04.975 SPDK_RUN_UBSAN=1
00:04:04.975 NET_TYPE=phy
00:04:04.995 RUN_NIGHTLY=0
00:04:05.000 [Pipeline] readFile
00:04:05.029 [Pipeline] withEnv
00:04:05.032 [Pipeline] {
00:04:05.047 [Pipeline] sh
00:04:05.335 + set -ex
00:04:05.335 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:04:05.335 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:05.335 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:05.335 ++ SPDK_TEST_NVMF=1
00:04:05.335 ++ SPDK_TEST_NVME_CLI=1
00:04:05.335 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:05.335 ++ SPDK_TEST_NVMF_NICS=e810
00:04:05.335 ++ SPDK_TEST_VFIOUSER=1
00:04:05.335 ++ SPDK_RUN_UBSAN=1
00:04:05.335 ++ NET_TYPE=phy
00:04:05.335 ++ RUN_NIGHTLY=0
00:04:05.335 + case $SPDK_TEST_NVMF_NICS in
00:04:05.335 + DRIVERS=ice
00:04:05.335 + [[ tcp == \r\d\m\a ]]
00:04:05.335 + [[ -n ice ]]
00:04:05.335 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:04:05.335 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:04:05.335 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:04:05.335 rmmod: ERROR: Module irdma is not currently loaded
00:04:05.335 rmmod: ERROR: Module i40iw is not currently loaded
00:04:05.335 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:04:05.335 + true
00:04:05.335 + for D in $DRIVERS
00:04:05.335 + sudo modprobe ice
00:04:05.335 + exit 0
00:04:05.345 [Pipeline] }
00:04:05.360 [Pipeline] // withEnv
00:04:05.365 [Pipeline] }
00:04:05.380 [Pipeline] // stage
00:04:05.390 [Pipeline] catchError
00:04:05.391 [Pipeline] {
00:04:05.407 [Pipeline] timeout
00:04:05.407 Timeout set to expire in 1 hr 0 min
00:04:05.410 [Pipeline] {
00:04:05.426 [Pipeline] stage
00:04:05.428 [Pipeline] { (Tests)
00:04:05.445 [Pipeline] sh
00:04:05.735 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:05.735 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:05.735 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:05.735 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:04:05.735 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:05.735 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:04:05.735 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:04:05.735 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:04:05.735 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:04:05.735 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:04:05.735 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:04:05.735 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:05.735 + source /etc/os-release
00:04:05.735 ++ NAME='Fedora Linux'
00:04:05.735 ++ VERSION='39 (Cloud Edition)'
00:04:05.735 ++ ID=fedora
00:04:05.735 ++ VERSION_ID=39
00:04:05.735 ++ VERSION_CODENAME=
00:04:05.735 ++ PLATFORM_ID=platform:f39
00:04:05.735 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:05.735 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:05.735 ++ LOGO=fedora-logo-icon
00:04:05.735 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:05.735 ++ HOME_URL=https://fedoraproject.org/
00:04:05.735 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:05.735 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:05.735 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:05.735 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:05.735 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:05.735 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:05.735 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:05.735 ++ SUPPORT_END=2024-11-12
00:04:05.735 ++ VARIANT='Cloud Edition'
00:04:05.735 ++ VARIANT_ID=cloud
00:04:05.735 + uname -a
00:04:05.735 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:05.735 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:08.279 Hugepages
00:04:08.279 node hugesize free / total
00:04:08.279 node0 1048576kB 0 / 0
00:04:08.279 node0 2048kB 0 / 0
00:04:08.279 node1 1048576kB 0 / 0
00:04:08.279 node1 2048kB 0 / 0
00:04:08.279
00:04:08.279 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:08.279 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:04:08.279 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:04:08.279 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:04:08.279 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:04:08.279 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:04:08.279 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:04:08.279 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:04:08.279 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:04:08.279 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:04:08.279 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:04:08.279 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:04:08.279 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:04:08.279 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:04:08.279 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:04:08.279 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:04:08.279 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:04:08.279 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:04:08.279 + rm -f /tmp/spdk-ld-path
00:04:08.279 + source autorun-spdk.conf
00:04:08.279 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:08.279 ++ SPDK_TEST_NVMF=1
00:04:08.279 ++ SPDK_TEST_NVME_CLI=1
00:04:08.279 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:08.279 ++ SPDK_TEST_NVMF_NICS=e810
00:04:08.279 ++ SPDK_TEST_VFIOUSER=1
00:04:08.279 ++ SPDK_RUN_UBSAN=1
00:04:08.279 ++ NET_TYPE=phy
00:04:08.279 ++ RUN_NIGHTLY=0
00:04:08.279 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:08.279 + [[ -n '' ]]
00:04:08.279 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:08.279 + for M in /var/spdk/build-*-manifest.txt
00:04:08.279 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:08.279 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:04:08.279 + for M in /var/spdk/build-*-manifest.txt
00:04:08.279 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:08.279 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:04:08.279 + for M in /var/spdk/build-*-manifest.txt
00:04:08.279 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:08.279 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:04:08.279 ++ uname
00:04:08.279 + [[ Linux == \L\i\n\u\x ]]
00:04:08.279 + sudo dmesg -T
00:04:08.540 + sudo dmesg --clear
00:04:08.540 + dmesg_pid=3653714
00:04:08.540 + [[ Fedora Linux == FreeBSD ]]
00:04:08.540 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:08.540 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:08.540 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:08.540 + [[ -x /usr/src/fio-static/fio ]]
00:04:08.540 + export FIO_BIN=/usr/src/fio-static/fio
00:04:08.540 + FIO_BIN=/usr/src/fio-static/fio
00:04:08.540 + sudo dmesg -Tw
00:04:08.540 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:08.540 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:08.540 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:08.540 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:08.540 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:08.540 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:08.540 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:08.540 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:08.540 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:08.540 10:31:58 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:04:08.540 10:31:58 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:08.540 10:31:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:08.540 10:31:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:04:08.540 10:31:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:04:08.540 10:31:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:08.540 10:31:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:04:08.540 10:31:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:04:08.540 10:31:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:04:08.540 10:31:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:04:08.540 10:31:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:04:08.540 10:31:58 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:04:08.540 10:31:58 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:08.540 10:31:58 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:04:08.540 10:31:58 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:08.540 10:31:58 -- scripts/common.sh@15 -- $ shopt -s extglob
00:04:08.540 10:31:58 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:04:08.540 10:31:58 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:08.540 10:31:58 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:08.540 10:31:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:08.540 10:31:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:08.540 10:31:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:08.540 10:31:58 -- paths/export.sh@5 -- $ export PATH
00:04:08.540 10:31:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:08.540 10:31:58 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:04:08.540 10:31:58 -- common/autobuild_common.sh@486 -- $ date +%s
00:04:08.540 10:31:58 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732008718.XXXXXX
00:04:08.540 10:31:58 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732008718.SIuqu8
00:04:08.540 10:31:58 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:04:08.540 10:31:58 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:04:08.540 10:31:58 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:04:08.540 10:31:58 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:04:08.540 10:31:58 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:04:08.540 10:31:58 -- common/autobuild_common.sh@502 -- $ get_config_params
00:04:08.540 10:31:58 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:04:08.540 10:31:58 -- common/autotest_common.sh@10 -- $ set +x
00:04:08.540 10:31:58 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:04:08.540 10:31:58 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:04:08.540 10:31:58 -- pm/common@17 -- $ local monitor
00:04:08.540 10:31:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:08.540 10:31:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:08.540 10:31:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:08.540 10:31:58 -- pm/common@21 -- $ date +%s
00:04:08.540 10:31:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:08.540 10:31:58 -- pm/common@21 -- $ date +%s
00:04:08.540 10:31:58 -- pm/common@25 -- $ sleep 1
00:04:08.540 10:31:58 -- pm/common@21 -- $ date +%s
00:04:08.540 10:31:58 -- pm/common@21 -- $ date +%s
00:04:08.540 10:31:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008718
00:04:08.540 10:31:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008718
00:04:08.540 10:31:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008718
00:04:08.540 10:31:58 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008718
00:04:08.801 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008718_collect-cpu-load.pm.log
00:04:08.801 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008718_collect-vmstat.pm.log
00:04:08.801 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008718_collect-cpu-temp.pm.log
00:04:08.801 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008718_collect-bmc-pm.bmc.pm.log
00:04:09.741 10:31:59 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:04:09.741 10:31:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:04:09.741 10:31:59 -- spdk/autobuild.sh@12 -- $ umask 022
00:04:09.741 10:31:59 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:09.741 10:31:59 -- spdk/autobuild.sh@16 -- $ date -u
00:04:09.741 Tue Nov 19 09:31:59 AM UTC 2024
00:04:09.741 10:31:59 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:09.741 v25.01-pre-193-ga0c128549
00:04:09.741 10:31:59 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:04:09.741 10:31:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:04:09.741 10:31:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:04:09.741 10:31:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:09.741 10:31:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:09.741 10:31:59 -- common/autotest_common.sh@10 -- $ set +x
00:04:09.741 ************************************
00:04:09.741 START TEST ubsan
00:04:09.741 ************************************
00:04:09.741 10:31:59 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:04:09.741 using ubsan
00:04:09.741
00:04:09.741 real 0m0.000s
00:04:09.741 user 0m0.000s
00:04:09.741 sys 0m0.000s
00:04:09.741 10:31:59 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:09.741 10:31:59 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:09.741 ************************************
00:04:09.741 END TEST ubsan
00:04:09.741 ************************************
00:04:09.741 10:31:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:04:09.741 10:31:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:09.741 10:31:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:09.741 10:31:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:09.741 10:31:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:09.741 10:31:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:09.741 10:31:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:09.741 10:31:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:09.741 10:31:59 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:04:10.001 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:04:10.001 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:04:10.260 Using 'verbs' RDMA provider
00:04:23.056 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:04:35.281 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:04:35.281 Creating mk/config.mk...done.
00:04:35.281 Creating mk/cc.flags.mk...done.
00:04:35.281 Type 'make' to build.
00:04:35.281 10:32:24 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:04:35.281 10:32:24 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:35.281 10:32:24 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:35.281 10:32:24 -- common/autotest_common.sh@10 -- $ set +x
00:04:35.281 ************************************
00:04:35.281 START TEST make
00:04:35.281 ************************************
00:04:35.281 10:32:24 make -- common/autotest_common.sh@1129 -- $ make -j96
00:04:35.541 make[1]: Nothing to be done for 'all'.
00:04:36.938 The Meson build system
00:04:36.938 Version: 1.5.0
00:04:36.938 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:04:36.938 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:36.938 Build type: native build
00:04:36.938 Project name: libvfio-user
00:04:36.938 Project version: 0.0.1
00:04:36.938 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:36.938 C linker for the host machine: cc ld.bfd 2.40-14
00:04:36.938 Host machine cpu family: x86_64
00:04:36.938 Host machine cpu: x86_64
00:04:36.938 Run-time dependency threads found: YES
00:04:36.938 Library dl found: YES
00:04:36.938 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:36.938 Run-time dependency json-c found: YES 0.17
00:04:36.938 Run-time dependency cmocka found: YES 1.1.7
00:04:36.938 Program pytest-3 found: NO
00:04:36.938 Program flake8 found: NO
00:04:36.938 Program misspell-fixer found: NO
00:04:36.938 Program restructuredtext-lint found: NO
00:04:36.938 Program valgrind found: YES (/usr/bin/valgrind)
00:04:36.938 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:36.938 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:36.938 Compiler for C supports arguments -Wwrite-strings: YES
00:04:36.938 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:36.938 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:04:36.938 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:04:36.938 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:36.938 Build targets in project: 8
00:04:36.938 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:04:36.938 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:04:36.938
00:04:36.938 libvfio-user 0.0.1
00:04:36.938
00:04:36.938 User defined options
00:04:36.938 buildtype : debug
00:04:36.938 default_library: shared
00:04:36.938 libdir : /usr/local/lib
00:04:36.938
00:04:36.938 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:37.506 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:37.506 [1/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:04:37.506 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:04:37.506 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:04:37.506 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:04:37.506 [5/37] Compiling C object samples/null.p/null.c.o
00:04:37.506 [6/37] Compiling C object test/unit_tests.p/mocks.c.o
00:04:37.506 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:04:37.506 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:04:37.766 [9/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:04:37.766 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:04:37.766 [11/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:04:37.766 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:04:37.766 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:04:37.766 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:04:37.766 [15/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:04:37.766 [16/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:04:37.766 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:04:37.766 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:04:37.766 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:04:37.766 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:04:37.766 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:04:37.766 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:04:37.766 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:04:37.766 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:04:37.766 [25/37] Compiling C object samples/server.p/server.c.o
00:04:37.766 [26/37] Compiling C object samples/client.p/client.c.o
00:04:37.766 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:04:37.766 [28/37] Linking target samples/client
00:04:37.766 [29/37] Linking target test/unit_tests
00:04:37.766 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:04:37.766 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:04:38.026 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:04:38.026 [33/37] Linking target samples/server
00:04:38.026 [34/37] Linking target samples/null
00:04:38.026 [35/37] Linking target samples/lspci
00:04:38.026 [36/37] Linking target samples/shadow_ioeventfd_server
00:04:38.026 [37/37] Linking target samples/gpio-pci-idio-16
00:04:38.026 INFO: autodetecting backend as ninja
00:04:38.026 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:38.026 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:38.595 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:38.595 ninja: no work to do.
00:04:43.909 The Meson build system
00:04:43.909 Version: 1.5.0
00:04:43.909 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:04:43.909 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:04:43.909 Build type: native build
00:04:43.909 Program cat found: YES (/usr/bin/cat)
00:04:43.909 Project name: DPDK
00:04:43.909 Project version: 24.03.0
00:04:43.909 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:43.909 C linker for the host machine: cc ld.bfd 2.40-14
00:04:43.909 Host machine cpu family: x86_64
00:04:43.909 Host machine cpu: x86_64
00:04:43.909 Message: ## Building in Developer Mode ##
00:04:43.909 Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:43.909 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:04:43.909 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:04:43.909 Program python3 found: YES (/usr/bin/python3)
00:04:43.909 Program cat found: YES (/usr/bin/cat)
00:04:43.909 Compiler for C supports arguments -march=native: YES
00:04:43.909 Checking for size of "void *" : 8
00:04:43.909 Checking for size of "void *" : 8 (cached)
00:04:43.909 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:04:43.909 Library m found: YES
00:04:43.909 Library numa found: YES
00:04:43.909 Has header "numaif.h" : YES
00:04:43.909 Library fdt found: NO
00:04:43.909 Library execinfo found: NO
00:04:43.909 Has header "execinfo.h" : YES
00:04:43.909 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:43.909 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:43.909 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:43.909 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:43.909 Run-time dependency openssl found: YES 3.1.1
00:04:43.909 Run-time dependency libpcap found: YES 1.10.4
00:04:43.910 Has header "pcap.h" with dependency libpcap: YES
00:04:43.910 Compiler for C supports arguments -Wcast-qual: YES
00:04:43.910 Compiler for C supports arguments -Wdeprecated: YES
00:04:43.910 Compiler for C supports arguments -Wformat: YES
00:04:43.910 Compiler for C supports arguments -Wformat-nonliteral: NO
00:04:43.910 Compiler for C supports arguments -Wformat-security: NO
00:04:43.910 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:43.910 Compiler for C supports arguments -Wmissing-prototypes: YES
00:04:43.910 Compiler for C supports arguments -Wnested-externs: YES
00:04:43.910 Compiler for C supports arguments -Wold-style-definition: YES
00:04:43.910 Compiler for C supports arguments -Wpointer-arith: YES
00:04:43.910 Compiler for C supports arguments -Wsign-compare: YES
00:04:43.910 Compiler for C supports arguments -Wstrict-prototypes: YES
00:04:43.910 Compiler for C supports arguments -Wundef: YES
00:04:43.910 Compiler for C supports arguments -Wwrite-strings: YES
00:04:43.910 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:04:43.910 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:04:43.910 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:43.910 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:04:43.910 Program objdump found: YES (/usr/bin/objdump)
00:04:43.910 Compiler for C supports arguments -mavx512f: YES
00:04:43.910 Checking if "AVX512 checking" compiles: YES
00:04:43.910 Fetching value of define "__SSE4_2__" : 1
00:04:43.910 Fetching value of define "__AES__" : 1
00:04:43.910 Fetching value of define "__AVX__" : 1
00:04:43.910 Fetching value of define "__AVX2__" : 1
00:04:43.910 Fetching value of define "__AVX512BW__" : 1
00:04:43.910 Fetching value of define "__AVX512CD__" : 1
00:04:43.910 Fetching value of define "__AVX512DQ__" : 1
00:04:43.910 Fetching value of define "__AVX512F__" : 1
00:04:43.910 Fetching value of define "__AVX512VL__" : 1 00:04:43.910 Fetching value of define "__PCLMUL__" : 1 00:04:43.910 Fetching value of define "__RDRND__" : 1 00:04:43.910 Fetching value of define "__RDSEED__" : 1 00:04:43.910 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:43.910 Fetching value of define "__znver1__" : (undefined) 00:04:43.910 Fetching value of define "__znver2__" : (undefined) 00:04:43.910 Fetching value of define "__znver3__" : (undefined) 00:04:43.910 Fetching value of define "__znver4__" : (undefined) 00:04:43.910 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:43.910 Message: lib/log: Defining dependency "log" 00:04:43.910 Message: lib/kvargs: Defining dependency "kvargs" 00:04:43.910 Message: lib/telemetry: Defining dependency "telemetry" 00:04:43.910 Checking for function "getentropy" : NO 00:04:43.910 Message: lib/eal: Defining dependency "eal" 00:04:43.910 Message: lib/ring: Defining dependency "ring" 00:04:43.910 Message: lib/rcu: Defining dependency "rcu" 00:04:43.910 Message: lib/mempool: Defining dependency "mempool" 00:04:43.910 Message: lib/mbuf: Defining dependency "mbuf" 00:04:43.910 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:43.910 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:43.910 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:43.910 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:43.910 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:43.910 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:43.910 Compiler for C supports arguments -mpclmul: YES 00:04:43.910 Compiler for C supports arguments -maes: YES 00:04:43.910 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:43.910 Compiler for C supports arguments -mavx512bw: YES 00:04:43.910 Compiler for C supports arguments -mavx512dq: YES 00:04:43.910 Compiler for C supports arguments -mavx512vl: YES 00:04:43.910 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:04:43.910 Compiler for C supports arguments -mavx2: YES 00:04:43.910 Compiler for C supports arguments -mavx: YES 00:04:43.910 Message: lib/net: Defining dependency "net" 00:04:43.910 Message: lib/meter: Defining dependency "meter" 00:04:43.910 Message: lib/ethdev: Defining dependency "ethdev" 00:04:43.910 Message: lib/pci: Defining dependency "pci" 00:04:43.910 Message: lib/cmdline: Defining dependency "cmdline" 00:04:43.910 Message: lib/hash: Defining dependency "hash" 00:04:43.910 Message: lib/timer: Defining dependency "timer" 00:04:43.910 Message: lib/compressdev: Defining dependency "compressdev" 00:04:43.910 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:43.910 Message: lib/dmadev: Defining dependency "dmadev" 00:04:43.910 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:43.910 Message: lib/power: Defining dependency "power" 00:04:43.910 Message: lib/reorder: Defining dependency "reorder" 00:04:43.910 Message: lib/security: Defining dependency "security" 00:04:43.910 Has header "linux/userfaultfd.h" : YES 00:04:43.910 Has header "linux/vduse.h" : YES 00:04:43.910 Message: lib/vhost: Defining dependency "vhost" 00:04:43.910 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:43.910 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:43.910 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:43.910 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:43.910 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:43.910 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:43.910 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:43.910 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:43.910 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:43.910 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:04:43.910 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:43.910 Configuring doxy-api-html.conf using configuration 00:04:43.910 Configuring doxy-api-man.conf using configuration 00:04:43.910 Program mandb found: YES (/usr/bin/mandb) 00:04:43.910 Program sphinx-build found: NO 00:04:43.910 Configuring rte_build_config.h using configuration 00:04:43.910 Message: 00:04:43.910 ================= 00:04:43.910 Applications Enabled 00:04:43.910 ================= 00:04:43.910 00:04:43.910 apps: 00:04:43.910 00:04:43.910 00:04:43.910 Message: 00:04:43.910 ================= 00:04:43.910 Libraries Enabled 00:04:43.910 ================= 00:04:43.910 00:04:43.910 libs: 00:04:43.910 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:43.910 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:43.910 cryptodev, dmadev, power, reorder, security, vhost, 00:04:43.910 00:04:43.910 Message: 00:04:43.910 =============== 00:04:43.910 Drivers Enabled 00:04:43.910 =============== 00:04:43.910 00:04:43.910 common: 00:04:43.910 00:04:43.910 bus: 00:04:43.910 pci, vdev, 00:04:43.910 mempool: 00:04:43.910 ring, 00:04:43.910 dma: 00:04:43.910 00:04:43.910 net: 00:04:43.910 00:04:43.910 crypto: 00:04:43.910 00:04:43.910 compress: 00:04:43.910 00:04:43.910 vdpa: 00:04:43.910 00:04:43.910 00:04:43.910 Message: 00:04:43.910 ================= 00:04:43.910 Content Skipped 00:04:43.910 ================= 00:04:43.910 00:04:43.910 apps: 00:04:43.910 dumpcap: explicitly disabled via build config 00:04:43.910 graph: explicitly disabled via build config 00:04:43.910 pdump: explicitly disabled via build config 00:04:43.910 proc-info: explicitly disabled via build config 00:04:43.910 test-acl: explicitly disabled via build config 00:04:43.910 test-bbdev: explicitly disabled via build config 00:04:43.910 test-cmdline: explicitly disabled via build config 00:04:43.910 test-compress-perf: explicitly disabled via build config 00:04:43.910 test-crypto-perf: explicitly disabled 
via build config 00:04:43.910 test-dma-perf: explicitly disabled via build config 00:04:43.910 test-eventdev: explicitly disabled via build config 00:04:43.910 test-fib: explicitly disabled via build config 00:04:43.910 test-flow-perf: explicitly disabled via build config 00:04:43.910 test-gpudev: explicitly disabled via build config 00:04:43.910 test-mldev: explicitly disabled via build config 00:04:43.910 test-pipeline: explicitly disabled via build config 00:04:43.910 test-pmd: explicitly disabled via build config 00:04:43.910 test-regex: explicitly disabled via build config 00:04:43.910 test-sad: explicitly disabled via build config 00:04:43.910 test-security-perf: explicitly disabled via build config 00:04:43.910 00:04:43.910 libs: 00:04:43.910 argparse: explicitly disabled via build config 00:04:43.910 metrics: explicitly disabled via build config 00:04:43.910 acl: explicitly disabled via build config 00:04:43.910 bbdev: explicitly disabled via build config 00:04:43.910 bitratestats: explicitly disabled via build config 00:04:43.910 bpf: explicitly disabled via build config 00:04:43.910 cfgfile: explicitly disabled via build config 00:04:43.910 distributor: explicitly disabled via build config 00:04:43.910 efd: explicitly disabled via build config 00:04:43.910 eventdev: explicitly disabled via build config 00:04:43.910 dispatcher: explicitly disabled via build config 00:04:43.910 gpudev: explicitly disabled via build config 00:04:43.910 gro: explicitly disabled via build config 00:04:43.910 gso: explicitly disabled via build config 00:04:43.910 ip_frag: explicitly disabled via build config 00:04:43.910 jobstats: explicitly disabled via build config 00:04:43.910 latencystats: explicitly disabled via build config 00:04:43.910 lpm: explicitly disabled via build config 00:04:43.910 member: explicitly disabled via build config 00:04:43.911 pcapng: explicitly disabled via build config 00:04:43.911 rawdev: explicitly disabled via build config 00:04:43.911 regexdev: 
explicitly disabled via build config 00:04:43.911 mldev: explicitly disabled via build config 00:04:43.911 rib: explicitly disabled via build config 00:04:43.911 sched: explicitly disabled via build config 00:04:43.911 stack: explicitly disabled via build config 00:04:43.911 ipsec: explicitly disabled via build config 00:04:43.911 pdcp: explicitly disabled via build config 00:04:43.911 fib: explicitly disabled via build config 00:04:43.911 port: explicitly disabled via build config 00:04:43.911 pdump: explicitly disabled via build config 00:04:43.911 table: explicitly disabled via build config 00:04:43.911 pipeline: explicitly disabled via build config 00:04:43.911 graph: explicitly disabled via build config 00:04:43.911 node: explicitly disabled via build config 00:04:43.911 00:04:43.911 drivers: 00:04:43.911 common/cpt: not in enabled drivers build config 00:04:43.911 common/dpaax: not in enabled drivers build config 00:04:43.911 common/iavf: not in enabled drivers build config 00:04:43.911 common/idpf: not in enabled drivers build config 00:04:43.911 common/ionic: not in enabled drivers build config 00:04:43.911 common/mvep: not in enabled drivers build config 00:04:43.911 common/octeontx: not in enabled drivers build config 00:04:43.911 bus/auxiliary: not in enabled drivers build config 00:04:43.911 bus/cdx: not in enabled drivers build config 00:04:43.911 bus/dpaa: not in enabled drivers build config 00:04:43.911 bus/fslmc: not in enabled drivers build config 00:04:43.911 bus/ifpga: not in enabled drivers build config 00:04:43.911 bus/platform: not in enabled drivers build config 00:04:43.911 bus/uacce: not in enabled drivers build config 00:04:43.911 bus/vmbus: not in enabled drivers build config 00:04:43.911 common/cnxk: not in enabled drivers build config 00:04:43.911 common/mlx5: not in enabled drivers build config 00:04:43.911 common/nfp: not in enabled drivers build config 00:04:43.911 common/nitrox: not in enabled drivers build config 00:04:43.911 
common/qat: not in enabled drivers build config 00:04:43.911 common/sfc_efx: not in enabled drivers build config 00:04:43.911 mempool/bucket: not in enabled drivers build config 00:04:43.911 mempool/cnxk: not in enabled drivers build config 00:04:43.911 mempool/dpaa: not in enabled drivers build config 00:04:43.911 mempool/dpaa2: not in enabled drivers build config 00:04:43.911 mempool/octeontx: not in enabled drivers build config 00:04:43.911 mempool/stack: not in enabled drivers build config 00:04:43.911 dma/cnxk: not in enabled drivers build config 00:04:43.911 dma/dpaa: not in enabled drivers build config 00:04:43.911 dma/dpaa2: not in enabled drivers build config 00:04:43.911 dma/hisilicon: not in enabled drivers build config 00:04:43.911 dma/idxd: not in enabled drivers build config 00:04:43.911 dma/ioat: not in enabled drivers build config 00:04:43.911 dma/skeleton: not in enabled drivers build config 00:04:43.911 net/af_packet: not in enabled drivers build config 00:04:43.911 net/af_xdp: not in enabled drivers build config 00:04:43.911 net/ark: not in enabled drivers build config 00:04:43.911 net/atlantic: not in enabled drivers build config 00:04:43.911 net/avp: not in enabled drivers build config 00:04:43.911 net/axgbe: not in enabled drivers build config 00:04:43.911 net/bnx2x: not in enabled drivers build config 00:04:43.911 net/bnxt: not in enabled drivers build config 00:04:43.911 net/bonding: not in enabled drivers build config 00:04:43.911 net/cnxk: not in enabled drivers build config 00:04:43.911 net/cpfl: not in enabled drivers build config 00:04:43.911 net/cxgbe: not in enabled drivers build config 00:04:43.911 net/dpaa: not in enabled drivers build config 00:04:43.911 net/dpaa2: not in enabled drivers build config 00:04:43.911 net/e1000: not in enabled drivers build config 00:04:43.911 net/ena: not in enabled drivers build config 00:04:43.911 net/enetc: not in enabled drivers build config 00:04:43.911 net/enetfec: not in enabled drivers build 
config 00:04:43.911 net/enic: not in enabled drivers build config 00:04:43.911 net/failsafe: not in enabled drivers build config 00:04:43.911 net/fm10k: not in enabled drivers build config 00:04:43.911 net/gve: not in enabled drivers build config 00:04:43.911 net/hinic: not in enabled drivers build config 00:04:43.911 net/hns3: not in enabled drivers build config 00:04:43.911 net/i40e: not in enabled drivers build config 00:04:43.911 net/iavf: not in enabled drivers build config 00:04:43.911 net/ice: not in enabled drivers build config 00:04:43.911 net/idpf: not in enabled drivers build config 00:04:43.911 net/igc: not in enabled drivers build config 00:04:43.911 net/ionic: not in enabled drivers build config 00:04:43.911 net/ipn3ke: not in enabled drivers build config 00:04:43.911 net/ixgbe: not in enabled drivers build config 00:04:43.911 net/mana: not in enabled drivers build config 00:04:43.911 net/memif: not in enabled drivers build config 00:04:43.911 net/mlx4: not in enabled drivers build config 00:04:43.911 net/mlx5: not in enabled drivers build config 00:04:43.911 net/mvneta: not in enabled drivers build config 00:04:43.911 net/mvpp2: not in enabled drivers build config 00:04:43.911 net/netvsc: not in enabled drivers build config 00:04:43.911 net/nfb: not in enabled drivers build config 00:04:43.911 net/nfp: not in enabled drivers build config 00:04:43.911 net/ngbe: not in enabled drivers build config 00:04:43.911 net/null: not in enabled drivers build config 00:04:43.911 net/octeontx: not in enabled drivers build config 00:04:43.911 net/octeon_ep: not in enabled drivers build config 00:04:43.911 net/pcap: not in enabled drivers build config 00:04:43.911 net/pfe: not in enabled drivers build config 00:04:43.911 net/qede: not in enabled drivers build config 00:04:43.911 net/ring: not in enabled drivers build config 00:04:43.911 net/sfc: not in enabled drivers build config 00:04:43.911 net/softnic: not in enabled drivers build config 00:04:43.911 net/tap: 
not in enabled drivers build config 00:04:43.911 net/thunderx: not in enabled drivers build config 00:04:43.911 net/txgbe: not in enabled drivers build config 00:04:43.911 net/vdev_netvsc: not in enabled drivers build config 00:04:43.911 net/vhost: not in enabled drivers build config 00:04:43.911 net/virtio: not in enabled drivers build config 00:04:43.911 net/vmxnet3: not in enabled drivers build config 00:04:43.911 raw/*: missing internal dependency, "rawdev" 00:04:43.911 crypto/armv8: not in enabled drivers build config 00:04:43.911 crypto/bcmfs: not in enabled drivers build config 00:04:43.911 crypto/caam_jr: not in enabled drivers build config 00:04:43.911 crypto/ccp: not in enabled drivers build config 00:04:43.911 crypto/cnxk: not in enabled drivers build config 00:04:43.911 crypto/dpaa_sec: not in enabled drivers build config 00:04:43.911 crypto/dpaa2_sec: not in enabled drivers build config 00:04:43.911 crypto/ipsec_mb: not in enabled drivers build config 00:04:43.911 crypto/mlx5: not in enabled drivers build config 00:04:43.911 crypto/mvsam: not in enabled drivers build config 00:04:43.911 crypto/nitrox: not in enabled drivers build config 00:04:43.911 crypto/null: not in enabled drivers build config 00:04:43.911 crypto/octeontx: not in enabled drivers build config 00:04:43.911 crypto/openssl: not in enabled drivers build config 00:04:43.911 crypto/scheduler: not in enabled drivers build config 00:04:43.911 crypto/uadk: not in enabled drivers build config 00:04:43.911 crypto/virtio: not in enabled drivers build config 00:04:43.911 compress/isal: not in enabled drivers build config 00:04:43.911 compress/mlx5: not in enabled drivers build config 00:04:43.911 compress/nitrox: not in enabled drivers build config 00:04:43.911 compress/octeontx: not in enabled drivers build config 00:04:43.911 compress/zlib: not in enabled drivers build config 00:04:43.911 regex/*: missing internal dependency, "regexdev" 00:04:43.911 ml/*: missing internal dependency, "mldev" 
00:04:43.911 vdpa/ifc: not in enabled drivers build config 00:04:43.911 vdpa/mlx5: not in enabled drivers build config 00:04:43.911 vdpa/nfp: not in enabled drivers build config 00:04:43.911 vdpa/sfc: not in enabled drivers build config 00:04:43.911 event/*: missing internal dependency, "eventdev" 00:04:43.911 baseband/*: missing internal dependency, "bbdev" 00:04:43.911 gpu/*: missing internal dependency, "gpudev" 00:04:43.911 00:04:43.911 00:04:43.911 Build targets in project: 85 00:04:43.911 00:04:43.911 DPDK 24.03.0 00:04:43.911 00:04:43.911 User defined options 00:04:43.911 buildtype : debug 00:04:43.911 default_library : shared 00:04:43.911 libdir : lib 00:04:43.911 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:43.911 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:43.911 c_link_args : 00:04:43.911 cpu_instruction_set: native 00:04:43.911 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:04:43.911 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:04:43.911 enable_docs : false 00:04:43.911 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:43.911 enable_kmods : false 00:04:43.911 max_lcores : 128 00:04:43.911 tests : false 00:04:43.911 00:04:43.911 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:44.184 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:04:44.184 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:44.184 [2/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:44.184 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:44.184 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:44.184 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:44.184 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:44.184 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:44.184 [8/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:44.184 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:44.451 [10/268] Linking static target lib/librte_kvargs.a 00:04:44.451 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:44.451 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:44.451 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:44.451 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:44.451 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:44.451 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:44.451 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:44.451 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:44.451 [19/268] Linking static target lib/librte_log.a 00:04:44.451 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:44.451 [21/268] Linking static target lib/librte_pci.a 00:04:44.451 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:44.451 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:44.451 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:44.716 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:44.716 [26/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:44.716 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:44.716 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:44.716 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:44.716 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:44.716 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:44.716 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:44.716 [33/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:44.716 [34/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:44.716 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:44.716 [36/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:44.716 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:44.716 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:44.716 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:44.716 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:44.716 [41/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:44.716 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:44.716 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:44.716 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:44.716 [45/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:44.716 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:44.716 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:44.716 [48/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:44.716 [49/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:44.716 [50/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:44.716 [51/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:44.716 [52/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:44.716 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:44.716 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:44.716 [55/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:44.716 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:44.716 [57/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:44.716 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:44.716 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:44.716 [60/268] Linking static target lib/librte_meter.a 00:04:44.976 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:44.976 [62/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:44.976 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:44.976 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:44.976 [65/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:44.976 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:44.976 [67/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:44.976 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:44.976 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:44.976 [70/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:44.976 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:44.976 [72/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:44.976 [73/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:44.976 [74/268] Linking static target lib/librte_ring.a 00:04:44.976 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:44.976 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:44.976 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:44.976 [78/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:44.976 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:44.976 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:44.976 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:44.976 [82/268] Linking static target lib/librte_telemetry.a 00:04:44.976 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:44.976 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:44.976 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:44.976 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:44.977 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:44.977 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:44.977 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:44.977 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:44.977 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:44.977 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:44.977 [93/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:44.977 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:44.977 [95/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:44.977 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:44.977 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:44.977 [98/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:44.977 [99/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:44.977 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:44.977 [101/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:44.977 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:44.977 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:44.977 [104/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:44.977 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:44.977 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:44.977 [107/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:44.977 [108/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:44.977 [109/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:44.977 [110/268] Linking static target lib/librte_rcu.a 00:04:44.977 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:44.977 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:44.977 [113/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:44.977 [114/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:44.977 [115/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:44.977 
[116/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:44.977 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:44.977 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:44.977 [119/268] Linking static target lib/librte_mempool.a 00:04:44.977 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:44.977 [121/268] Linking static target lib/librte_net.a 00:04:44.977 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:44.977 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:44.977 [124/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:44.977 [125/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:44.977 [126/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:44.977 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:44.977 [128/268] Linking static target lib/librte_eal.a 00:04:44.977 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:45.236 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:45.236 [131/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:45.236 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:45.236 [133/268] Linking static target lib/librte_cmdline.a 00:04:45.236 [134/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:45.236 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:45.236 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:45.236 [137/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:45.236 [138/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 
00:04:45.236 [139/268] Linking static target lib/librte_timer.a 00:04:45.236 [140/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:45.236 [141/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:45.236 [142/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:45.236 [143/268] Linking target lib/librte_log.so.24.1 00:04:45.236 [144/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:45.236 [145/268] Linking static target lib/librte_mbuf.a 00:04:45.236 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:45.236 [147/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:45.236 [148/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:45.236 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:45.236 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:45.236 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:45.236 [152/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:45.236 [153/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:45.236 [154/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:45.236 [155/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:45.236 [156/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:45.236 [157/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:45.236 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:45.496 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:45.496 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:45.496 [161/268] Linking static target 
lib/librte_dmadev.a 00:04:45.496 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:45.496 [163/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:45.496 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:45.496 [165/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:45.496 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:45.496 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:45.496 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:45.496 [169/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:45.496 [170/268] Linking target lib/librte_kvargs.so.24.1 00:04:45.496 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:45.496 [172/268] Linking static target lib/librte_power.a 00:04:45.496 [173/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:45.496 [174/268] Linking target lib/librte_telemetry.so.24.1 00:04:45.496 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:45.496 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:45.496 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:45.496 [178/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:45.496 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:45.496 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:45.496 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:45.496 [182/268] Linking static target lib/librte_compressdev.a 00:04:45.496 [183/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:45.496 [184/268] Compiling C object 
drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:45.496 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:45.496 [186/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:45.496 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:45.496 [188/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:45.496 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:45.496 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:45.496 [191/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:45.496 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:45.496 [193/268] Linking static target lib/librte_security.a 00:04:45.497 [194/268] Linking static target lib/librte_hash.a 00:04:45.497 [195/268] Linking static target lib/librte_reorder.a 00:04:45.497 [196/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:45.497 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:45.497 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:45.497 [199/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:45.497 [200/268] Linking static target drivers/librte_bus_vdev.a 00:04:45.497 [201/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:45.755 [202/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:45.755 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:45.756 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:45.756 [205/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:45.756 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 
00:04:45.756 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:45.756 [208/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:45.756 [209/268] Linking static target drivers/librte_bus_pci.a 00:04:45.756 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:45.756 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:45.756 [212/268] Linking static target drivers/librte_mempool_ring.a 00:04:45.756 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:45.756 [214/268] Linking static target lib/librte_cryptodev.a 00:04:46.015 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.015 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.015 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.015 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:46.015 [219/268] Linking static target lib/librte_ethdev.a 00:04:46.015 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.274 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.274 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.274 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:46.274 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.274 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.533 [226/268] Generating lib/hash.sym_chk with a custom command 
(wrapped by meson to capture output) 00:04:46.533 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:47.471 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:47.471 [229/268] Linking static target lib/librte_vhost.a 00:04:47.731 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:49.638 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.918 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.177 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.177 [234/268] Linking target lib/librte_eal.so.24.1 00:04:55.177 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:55.436 [236/268] Linking target lib/librte_ring.so.24.1 00:04:55.436 [237/268] Linking target lib/librte_pci.so.24.1 00:04:55.436 [238/268] Linking target lib/librte_timer.so.24.1 00:04:55.436 [239/268] Linking target lib/librte_meter.so.24.1 00:04:55.436 [240/268] Linking target lib/librte_dmadev.so.24.1 00:04:55.436 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:55.436 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:55.436 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:55.436 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:55.437 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:55.437 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:55.437 [247/268] Linking target lib/librte_rcu.so.24.1 00:04:55.437 [248/268] Linking target lib/librte_mempool.so.24.1 00:04:55.437 [249/268] Linking target drivers/librte_bus_pci.so.24.1 
00:04:55.696 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:55.696 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:55.696 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:55.696 [253/268] Linking target lib/librte_mbuf.so.24.1 00:04:55.696 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:55.956 [255/268] Linking target lib/librte_reorder.so.24.1 00:04:55.956 [256/268] Linking target lib/librte_net.so.24.1 00:04:55.956 [257/268] Linking target lib/librte_compressdev.so.24.1 00:04:55.956 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:04:55.956 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:55.956 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:55.956 [261/268] Linking target lib/librte_hash.so.24.1 00:04:55.956 [262/268] Linking target lib/librte_cmdline.so.24.1 00:04:55.956 [263/268] Linking target lib/librte_security.so.24.1 00:04:55.956 [264/268] Linking target lib/librte_ethdev.so.24.1 00:04:56.215 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:56.215 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:56.215 [267/268] Linking target lib/librte_power.so.24.1 00:04:56.215 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:56.215 INFO: autodetecting backend as ninja 00:04:56.215 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:05:08.458 CC lib/log/log.o 00:05:08.458 CC lib/log/log_flags.o 00:05:08.458 CC lib/log/log_deprecated.o 00:05:08.458 CC lib/ut/ut.o 00:05:08.458 CC lib/ut_mock/mock.o 00:05:08.458 LIB libspdk_ut_mock.a 00:05:08.458 LIB libspdk_log.a 00:05:08.458 LIB libspdk_ut.a 00:05:08.458 SO 
libspdk_ut_mock.so.6.0 00:05:08.458 SO libspdk_ut.so.2.0 00:05:08.458 SO libspdk_log.so.7.1 00:05:08.458 SYMLINK libspdk_ut_mock.so 00:05:08.458 SYMLINK libspdk_ut.so 00:05:08.458 SYMLINK libspdk_log.so 00:05:08.458 CC lib/ioat/ioat.o 00:05:08.458 CC lib/dma/dma.o 00:05:08.458 CXX lib/trace_parser/trace.o 00:05:08.459 CC lib/util/base64.o 00:05:08.459 CC lib/util/bit_array.o 00:05:08.459 CC lib/util/cpuset.o 00:05:08.459 CC lib/util/crc16.o 00:05:08.459 CC lib/util/crc32.o 00:05:08.459 CC lib/util/crc32c.o 00:05:08.459 CC lib/util/crc32_ieee.o 00:05:08.459 CC lib/util/crc64.o 00:05:08.459 CC lib/util/dif.o 00:05:08.459 CC lib/util/fd.o 00:05:08.459 CC lib/util/fd_group.o 00:05:08.459 CC lib/util/file.o 00:05:08.459 CC lib/util/hexlify.o 00:05:08.459 CC lib/util/iov.o 00:05:08.459 CC lib/util/math.o 00:05:08.459 CC lib/util/net.o 00:05:08.459 CC lib/util/pipe.o 00:05:08.459 CC lib/util/strerror_tls.o 00:05:08.459 CC lib/util/string.o 00:05:08.459 CC lib/util/uuid.o 00:05:08.459 CC lib/util/xor.o 00:05:08.459 CC lib/util/zipf.o 00:05:08.459 CC lib/util/md5.o 00:05:08.459 CC lib/vfio_user/host/vfio_user.o 00:05:08.459 CC lib/vfio_user/host/vfio_user_pci.o 00:05:08.459 LIB libspdk_dma.a 00:05:08.459 SO libspdk_dma.so.5.0 00:05:08.459 LIB libspdk_ioat.a 00:05:08.459 SYMLINK libspdk_dma.so 00:05:08.459 SO libspdk_ioat.so.7.0 00:05:08.459 SYMLINK libspdk_ioat.so 00:05:08.459 LIB libspdk_vfio_user.a 00:05:08.459 LIB libspdk_util.a 00:05:08.459 SO libspdk_vfio_user.so.5.0 00:05:08.459 SYMLINK libspdk_vfio_user.so 00:05:08.459 SO libspdk_util.so.10.1 00:05:08.459 SYMLINK libspdk_util.so 00:05:08.459 LIB libspdk_trace_parser.a 00:05:08.459 SO libspdk_trace_parser.so.6.0 00:05:08.459 SYMLINK libspdk_trace_parser.so 00:05:08.459 CC lib/vmd/vmd.o 00:05:08.459 CC lib/vmd/led.o 00:05:08.459 CC lib/json/json_parse.o 00:05:08.459 CC lib/conf/conf.o 00:05:08.459 CC lib/idxd/idxd.o 00:05:08.459 CC lib/json/json_util.o 00:05:08.459 CC lib/rdma_utils/rdma_utils.o 00:05:08.459 CC 
lib/idxd/idxd_user.o 00:05:08.459 CC lib/json/json_write.o 00:05:08.459 CC lib/idxd/idxd_kernel.o 00:05:08.459 CC lib/env_dpdk/env.o 00:05:08.459 CC lib/env_dpdk/memory.o 00:05:08.459 CC lib/env_dpdk/pci.o 00:05:08.459 CC lib/env_dpdk/init.o 00:05:08.459 CC lib/env_dpdk/threads.o 00:05:08.459 CC lib/env_dpdk/pci_ioat.o 00:05:08.459 CC lib/env_dpdk/pci_virtio.o 00:05:08.459 CC lib/env_dpdk/pci_vmd.o 00:05:08.459 CC lib/env_dpdk/pci_idxd.o 00:05:08.459 CC lib/env_dpdk/pci_event.o 00:05:08.459 CC lib/env_dpdk/sigbus_handler.o 00:05:08.459 CC lib/env_dpdk/pci_dpdk.o 00:05:08.459 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:08.459 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:08.459 LIB libspdk_conf.a 00:05:08.459 SO libspdk_conf.so.6.0 00:05:08.459 LIB libspdk_rdma_utils.a 00:05:08.459 LIB libspdk_json.a 00:05:08.459 SYMLINK libspdk_conf.so 00:05:08.459 SO libspdk_rdma_utils.so.1.0 00:05:08.459 SO libspdk_json.so.6.0 00:05:08.459 SYMLINK libspdk_rdma_utils.so 00:05:08.459 SYMLINK libspdk_json.so 00:05:08.459 LIB libspdk_idxd.a 00:05:08.459 LIB libspdk_vmd.a 00:05:08.459 SO libspdk_idxd.so.12.1 00:05:08.718 SO libspdk_vmd.so.6.0 00:05:08.718 SYMLINK libspdk_idxd.so 00:05:08.718 SYMLINK libspdk_vmd.so 00:05:08.718 CC lib/rdma_provider/common.o 00:05:08.718 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:08.718 CC lib/jsonrpc/jsonrpc_server.o 00:05:08.718 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:08.718 CC lib/jsonrpc/jsonrpc_client.o 00:05:08.718 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:08.977 LIB libspdk_rdma_provider.a 00:05:08.977 SO libspdk_rdma_provider.so.7.0 00:05:08.977 LIB libspdk_jsonrpc.a 00:05:08.977 SO libspdk_jsonrpc.so.6.0 00:05:08.977 SYMLINK libspdk_rdma_provider.so 00:05:08.977 LIB libspdk_env_dpdk.a 00:05:08.977 SYMLINK libspdk_jsonrpc.so 00:05:09.237 SO libspdk_env_dpdk.so.15.1 00:05:09.237 SYMLINK libspdk_env_dpdk.so 00:05:09.496 CC lib/rpc/rpc.o 00:05:09.496 LIB libspdk_rpc.a 00:05:09.496 SO libspdk_rpc.so.6.0 00:05:09.756 SYMLINK libspdk_rpc.so 00:05:10.016 CC 
lib/notify/notify.o 00:05:10.016 CC lib/notify/notify_rpc.o 00:05:10.016 CC lib/trace/trace.o 00:05:10.016 CC lib/keyring/keyring.o 00:05:10.016 CC lib/keyring/keyring_rpc.o 00:05:10.016 CC lib/trace/trace_flags.o 00:05:10.016 CC lib/trace/trace_rpc.o 00:05:10.275 LIB libspdk_notify.a 00:05:10.275 SO libspdk_notify.so.6.0 00:05:10.275 LIB libspdk_keyring.a 00:05:10.275 LIB libspdk_trace.a 00:05:10.275 SO libspdk_keyring.so.2.0 00:05:10.275 SYMLINK libspdk_notify.so 00:05:10.275 SO libspdk_trace.so.11.0 00:05:10.275 SYMLINK libspdk_keyring.so 00:05:10.275 SYMLINK libspdk_trace.so 00:05:10.535 CC lib/thread/thread.o 00:05:10.535 CC lib/thread/iobuf.o 00:05:10.535 CC lib/sock/sock.o 00:05:10.535 CC lib/sock/sock_rpc.o 00:05:11.105 LIB libspdk_sock.a 00:05:11.105 SO libspdk_sock.so.10.0 00:05:11.105 SYMLINK libspdk_sock.so 00:05:11.364 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:11.364 CC lib/nvme/nvme_ctrlr.o 00:05:11.364 CC lib/nvme/nvme_fabric.o 00:05:11.364 CC lib/nvme/nvme_ns_cmd.o 00:05:11.364 CC lib/nvme/nvme_ns.o 00:05:11.364 CC lib/nvme/nvme_pcie_common.o 00:05:11.364 CC lib/nvme/nvme_pcie.o 00:05:11.364 CC lib/nvme/nvme_qpair.o 00:05:11.364 CC lib/nvme/nvme.o 00:05:11.364 CC lib/nvme/nvme_quirks.o 00:05:11.364 CC lib/nvme/nvme_transport.o 00:05:11.364 CC lib/nvme/nvme_discovery.o 00:05:11.364 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:11.364 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:11.364 CC lib/nvme/nvme_tcp.o 00:05:11.364 CC lib/nvme/nvme_opal.o 00:05:11.364 CC lib/nvme/nvme_io_msg.o 00:05:11.364 CC lib/nvme/nvme_poll_group.o 00:05:11.364 CC lib/nvme/nvme_zns.o 00:05:11.364 CC lib/nvme/nvme_stubs.o 00:05:11.364 CC lib/nvme/nvme_auth.o 00:05:11.364 CC lib/nvme/nvme_cuse.o 00:05:11.364 CC lib/nvme/nvme_vfio_user.o 00:05:11.364 CC lib/nvme/nvme_rdma.o 00:05:11.622 LIB libspdk_thread.a 00:05:11.622 SO libspdk_thread.so.11.0 00:05:11.881 SYMLINK libspdk_thread.so 00:05:12.141 CC lib/accel/accel.o 00:05:12.141 CC lib/accel/accel_rpc.o 00:05:12.141 CC lib/accel/accel_sw.o 
00:05:12.141 CC lib/blob/request.o 00:05:12.141 CC lib/blob/blobstore.o 00:05:12.141 CC lib/blob/zeroes.o 00:05:12.141 CC lib/blob/blob_bs_dev.o 00:05:12.141 CC lib/init/json_config.o 00:05:12.141 CC lib/init/subsystem.o 00:05:12.141 CC lib/init/rpc.o 00:05:12.141 CC lib/init/subsystem_rpc.o 00:05:12.141 CC lib/fsdev/fsdev.o 00:05:12.141 CC lib/fsdev/fsdev_io.o 00:05:12.141 CC lib/fsdev/fsdev_rpc.o 00:05:12.141 CC lib/vfu_tgt/tgt_endpoint.o 00:05:12.141 CC lib/virtio/virtio.o 00:05:12.141 CC lib/virtio/virtio_vhost_user.o 00:05:12.141 CC lib/vfu_tgt/tgt_rpc.o 00:05:12.141 CC lib/virtio/virtio_vfio_user.o 00:05:12.141 CC lib/virtio/virtio_pci.o 00:05:12.400 LIB libspdk_init.a 00:05:12.400 SO libspdk_init.so.6.0 00:05:12.400 LIB libspdk_virtio.a 00:05:12.400 LIB libspdk_vfu_tgt.a 00:05:12.400 SYMLINK libspdk_init.so 00:05:12.400 SO libspdk_vfu_tgt.so.3.0 00:05:12.400 SO libspdk_virtio.so.7.0 00:05:12.400 SYMLINK libspdk_vfu_tgt.so 00:05:12.400 SYMLINK libspdk_virtio.so 00:05:12.659 LIB libspdk_fsdev.a 00:05:12.659 SO libspdk_fsdev.so.2.0 00:05:12.659 CC lib/event/app.o 00:05:12.659 CC lib/event/reactor.o 00:05:12.659 CC lib/event/log_rpc.o 00:05:12.659 CC lib/event/app_rpc.o 00:05:12.659 CC lib/event/scheduler_static.o 00:05:12.659 SYMLINK libspdk_fsdev.so 00:05:12.919 LIB libspdk_accel.a 00:05:12.919 SO libspdk_accel.so.16.0 00:05:12.919 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:12.919 SYMLINK libspdk_accel.so 00:05:13.178 LIB libspdk_event.a 00:05:13.178 LIB libspdk_nvme.a 00:05:13.178 SO libspdk_event.so.14.0 00:05:13.178 SYMLINK libspdk_event.so 00:05:13.178 SO libspdk_nvme.so.15.0 00:05:13.437 CC lib/bdev/bdev.o 00:05:13.437 CC lib/bdev/bdev_rpc.o 00:05:13.437 CC lib/bdev/bdev_zone.o 00:05:13.437 CC lib/bdev/part.o 00:05:13.437 CC lib/bdev/scsi_nvme.o 00:05:13.437 SYMLINK libspdk_nvme.so 00:05:13.437 LIB libspdk_fuse_dispatcher.a 00:05:13.437 SO libspdk_fuse_dispatcher.so.1.0 00:05:13.697 SYMLINK libspdk_fuse_dispatcher.so 00:05:14.266 LIB libspdk_blob.a 
00:05:14.266 SO libspdk_blob.so.11.0 00:05:14.266 SYMLINK libspdk_blob.so 00:05:14.836 CC lib/lvol/lvol.o 00:05:14.836 CC lib/blobfs/tree.o 00:05:14.836 CC lib/blobfs/blobfs.o 00:05:15.096 LIB libspdk_bdev.a 00:05:15.096 SO libspdk_bdev.so.17.0 00:05:15.356 SYMLINK libspdk_bdev.so 00:05:15.356 LIB libspdk_blobfs.a 00:05:15.356 SO libspdk_blobfs.so.10.0 00:05:15.356 LIB libspdk_lvol.a 00:05:15.356 SYMLINK libspdk_blobfs.so 00:05:15.356 SO libspdk_lvol.so.10.0 00:05:15.356 SYMLINK libspdk_lvol.so 00:05:15.617 CC lib/scsi/dev.o 00:05:15.617 CC lib/scsi/lun.o 00:05:15.617 CC lib/scsi/port.o 00:05:15.617 CC lib/nvmf/ctrlr_discovery.o 00:05:15.617 CC lib/scsi/scsi.o 00:05:15.617 CC lib/nvmf/ctrlr.o 00:05:15.617 CC lib/scsi/scsi_bdev.o 00:05:15.617 CC lib/nvmf/ctrlr_bdev.o 00:05:15.617 CC lib/scsi/scsi_pr.o 00:05:15.617 CC lib/nvmf/subsystem.o 00:05:15.617 CC lib/scsi/scsi_rpc.o 00:05:15.617 CC lib/nvmf/nvmf.o 00:05:15.617 CC lib/scsi/task.o 00:05:15.617 CC lib/nvmf/nvmf_rpc.o 00:05:15.617 CC lib/nbd/nbd.o 00:05:15.617 CC lib/nvmf/transport.o 00:05:15.617 CC lib/ublk/ublk.o 00:05:15.617 CC lib/nbd/nbd_rpc.o 00:05:15.617 CC lib/nvmf/tcp.o 00:05:15.617 CC lib/ublk/ublk_rpc.o 00:05:15.617 CC lib/nvmf/stubs.o 00:05:15.617 CC lib/nvmf/mdns_server.o 00:05:15.617 CC lib/nvmf/vfio_user.o 00:05:15.617 CC lib/ftl/ftl_core.o 00:05:15.617 CC lib/ftl/ftl_init.o 00:05:15.617 CC lib/nvmf/rdma.o 00:05:15.617 CC lib/nvmf/auth.o 00:05:15.617 CC lib/ftl/ftl_layout.o 00:05:15.617 CC lib/ftl/ftl_debug.o 00:05:15.617 CC lib/ftl/ftl_io.o 00:05:15.617 CC lib/ftl/ftl_sb.o 00:05:15.617 CC lib/ftl/ftl_l2p.o 00:05:15.617 CC lib/ftl/ftl_l2p_flat.o 00:05:15.617 CC lib/ftl/ftl_nv_cache.o 00:05:15.617 CC lib/ftl/ftl_band.o 00:05:15.617 CC lib/ftl/ftl_band_ops.o 00:05:15.617 CC lib/ftl/ftl_writer.o 00:05:15.617 CC lib/ftl/ftl_rq.o 00:05:15.617 CC lib/ftl/ftl_reloc.o 00:05:15.617 CC lib/ftl/ftl_l2p_cache.o 00:05:15.617 CC lib/ftl/ftl_p2l.o 00:05:15.617 CC lib/ftl/ftl_p2l_log.o 00:05:15.617 CC 
lib/ftl/mngt/ftl_mngt.o 00:05:15.617 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:15.617 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:15.617 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:15.617 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:15.617 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:15.617 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:15.617 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:15.617 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:15.617 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:15.617 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:15.617 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:15.617 CC lib/ftl/utils/ftl_conf.o 00:05:15.617 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:15.617 CC lib/ftl/utils/ftl_mempool.o 00:05:15.617 CC lib/ftl/utils/ftl_md.o 00:05:15.617 CC lib/ftl/utils/ftl_property.o 00:05:15.617 CC lib/ftl/utils/ftl_bitmap.o 00:05:15.617 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:15.617 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:15.617 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:15.617 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:15.617 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:15.617 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:15.617 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:15.617 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:15.617 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:15.617 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:15.617 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:15.617 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:15.617 CC lib/ftl/base/ftl_base_dev.o 00:05:15.617 CC lib/ftl/base/ftl_base_bdev.o 00:05:15.617 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:15.617 CC lib/ftl/ftl_trace.o 00:05:16.187 LIB libspdk_nbd.a 00:05:16.187 SO libspdk_nbd.so.7.0 00:05:16.187 LIB libspdk_scsi.a 00:05:16.187 SYMLINK libspdk_nbd.so 00:05:16.187 SO libspdk_scsi.so.9.0 00:05:16.447 LIB libspdk_ublk.a 00:05:16.447 SYMLINK libspdk_scsi.so 00:05:16.447 SO libspdk_ublk.so.3.0 00:05:16.447 SYMLINK libspdk_ublk.so 00:05:16.706 CC lib/iscsi/conn.o 00:05:16.706 CC lib/iscsi/init_grp.o 00:05:16.706 CC lib/iscsi/iscsi.o 00:05:16.706 CC 
lib/iscsi/param.o 00:05:16.706 CC lib/iscsi/portal_grp.o 00:05:16.706 CC lib/iscsi/tgt_node.o 00:05:16.706 CC lib/iscsi/iscsi_subsystem.o 00:05:16.706 CC lib/iscsi/iscsi_rpc.o 00:05:16.706 CC lib/iscsi/task.o 00:05:16.706 CC lib/vhost/vhost.o 00:05:16.706 CC lib/vhost/vhost_rpc.o 00:05:16.706 CC lib/vhost/vhost_scsi.o 00:05:16.706 CC lib/vhost/vhost_blk.o 00:05:16.706 CC lib/vhost/rte_vhost_user.o 00:05:16.706 LIB libspdk_ftl.a 00:05:16.965 SO libspdk_ftl.so.9.0 00:05:17.224 SYMLINK libspdk_ftl.so 00:05:17.224 LIB libspdk_nvmf.a 00:05:17.483 SO libspdk_nvmf.so.20.0 00:05:17.483 LIB libspdk_vhost.a 00:05:17.483 SYMLINK libspdk_nvmf.so 00:05:17.483 SO libspdk_vhost.so.8.0 00:05:17.744 SYMLINK libspdk_vhost.so 00:05:17.744 LIB libspdk_iscsi.a 00:05:17.744 SO libspdk_iscsi.so.8.0 00:05:18.005 SYMLINK libspdk_iscsi.so 00:05:18.266 CC module/env_dpdk/env_dpdk_rpc.o 00:05:18.526 CC module/vfu_device/vfu_virtio.o 00:05:18.526 CC module/vfu_device/vfu_virtio_blk.o 00:05:18.526 CC module/vfu_device/vfu_virtio_scsi.o 00:05:18.526 CC module/vfu_device/vfu_virtio_rpc.o 00:05:18.526 CC module/vfu_device/vfu_virtio_fs.o 00:05:18.526 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:18.526 LIB libspdk_env_dpdk_rpc.a 00:05:18.526 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:18.526 CC module/accel/iaa/accel_iaa.o 00:05:18.527 CC module/keyring/linux/keyring.o 00:05:18.527 CC module/keyring/linux/keyring_rpc.o 00:05:18.527 CC module/accel/iaa/accel_iaa_rpc.o 00:05:18.527 CC module/accel/error/accel_error_rpc.o 00:05:18.527 CC module/accel/error/accel_error.o 00:05:18.527 CC module/sock/posix/posix.o 00:05:18.527 CC module/accel/ioat/accel_ioat.o 00:05:18.527 CC module/scheduler/gscheduler/gscheduler.o 00:05:18.527 CC module/accel/dsa/accel_dsa.o 00:05:18.527 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:18.527 CC module/accel/ioat/accel_ioat_rpc.o 00:05:18.527 CC module/fsdev/aio/fsdev_aio.o 00:05:18.527 CC module/accel/dsa/accel_dsa_rpc.o 00:05:18.527 CC 
module/fsdev/aio/linux_aio_mgr.o 00:05:18.527 CC module/keyring/file/keyring.o 00:05:18.527 CC module/blob/bdev/blob_bdev.o 00:05:18.527 CC module/keyring/file/keyring_rpc.o 00:05:18.527 SO libspdk_env_dpdk_rpc.so.6.0 00:05:18.527 SYMLINK libspdk_env_dpdk_rpc.so 00:05:18.786 LIB libspdk_keyring_linux.a 00:05:18.786 LIB libspdk_scheduler_dpdk_governor.a 00:05:18.786 LIB libspdk_keyring_file.a 00:05:18.786 LIB libspdk_scheduler_gscheduler.a 00:05:18.786 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:18.786 LIB libspdk_accel_iaa.a 00:05:18.786 SO libspdk_keyring_linux.so.1.0 00:05:18.786 LIB libspdk_scheduler_dynamic.a 00:05:18.786 SO libspdk_scheduler_gscheduler.so.4.0 00:05:18.786 SO libspdk_keyring_file.so.2.0 00:05:18.786 LIB libspdk_accel_ioat.a 00:05:18.786 LIB libspdk_accel_error.a 00:05:18.786 SO libspdk_scheduler_dynamic.so.4.0 00:05:18.786 SO libspdk_accel_iaa.so.3.0 00:05:18.786 SO libspdk_accel_ioat.so.6.0 00:05:18.786 SYMLINK libspdk_keyring_linux.so 00:05:18.786 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:18.786 SYMLINK libspdk_scheduler_gscheduler.so 00:05:18.786 SO libspdk_accel_error.so.2.0 00:05:18.786 SYMLINK libspdk_keyring_file.so 00:05:18.786 LIB libspdk_blob_bdev.a 00:05:18.786 LIB libspdk_accel_dsa.a 00:05:18.786 SYMLINK libspdk_scheduler_dynamic.so 00:05:18.786 SYMLINK libspdk_accel_iaa.so 00:05:18.786 SYMLINK libspdk_accel_ioat.so 00:05:18.786 SO libspdk_blob_bdev.so.11.0 00:05:18.786 SO libspdk_accel_dsa.so.5.0 00:05:18.786 SYMLINK libspdk_accel_error.so 00:05:19.046 SYMLINK libspdk_blob_bdev.so 00:05:19.046 SYMLINK libspdk_accel_dsa.so 00:05:19.046 LIB libspdk_vfu_device.a 00:05:19.046 SO libspdk_vfu_device.so.3.0 00:05:19.046 SYMLINK libspdk_vfu_device.so 00:05:19.046 LIB libspdk_fsdev_aio.a 00:05:19.306 SO libspdk_fsdev_aio.so.1.0 00:05:19.306 LIB libspdk_sock_posix.a 00:05:19.306 SO libspdk_sock_posix.so.6.0 00:05:19.306 SYMLINK libspdk_fsdev_aio.so 00:05:19.306 SYMLINK libspdk_sock_posix.so 00:05:19.306 CC module/bdev/gpt/gpt.o 
00:05:19.306 CC module/bdev/gpt/vbdev_gpt.o 00:05:19.306 CC module/bdev/delay/vbdev_delay.o 00:05:19.306 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:19.306 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:19.306 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:19.306 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:19.306 CC module/bdev/passthru/vbdev_passthru.o 00:05:19.306 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:19.306 CC module/bdev/malloc/bdev_malloc.o 00:05:19.306 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:19.306 CC module/bdev/null/bdev_null.o 00:05:19.306 CC module/bdev/null/bdev_null_rpc.o 00:05:19.306 CC module/bdev/iscsi/bdev_iscsi.o 00:05:19.306 CC module/bdev/aio/bdev_aio_rpc.o 00:05:19.306 CC module/bdev/aio/bdev_aio.o 00:05:19.306 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:19.306 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:19.306 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:19.306 CC module/blobfs/bdev/blobfs_bdev.o 00:05:19.306 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:19.306 CC module/bdev/ftl/bdev_ftl.o 00:05:19.306 CC module/bdev/nvme/nvme_rpc.o 00:05:19.306 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:19.306 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:19.306 CC module/bdev/nvme/bdev_nvme.o 00:05:19.306 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:19.306 CC module/bdev/nvme/bdev_mdns_client.o 00:05:19.306 CC module/bdev/nvme/vbdev_opal.o 00:05:19.306 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:19.306 CC module/bdev/raid/bdev_raid.o 00:05:19.306 CC module/bdev/raid/bdev_raid_rpc.o 00:05:19.306 CC module/bdev/raid/bdev_raid_sb.o 00:05:19.306 CC module/bdev/raid/raid1.o 00:05:19.306 CC module/bdev/error/vbdev_error_rpc.o 00:05:19.306 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:19.306 CC module/bdev/error/vbdev_error.o 00:05:19.306 CC module/bdev/raid/raid0.o 00:05:19.306 CC module/bdev/split/vbdev_split.o 00:05:19.306 CC module/bdev/lvol/vbdev_lvol.o 00:05:19.306 CC module/bdev/split/vbdev_split_rpc.o 00:05:19.306 CC 
module/bdev/raid/concat.o 00:05:19.566 LIB libspdk_blobfs_bdev.a 00:05:19.566 SO libspdk_blobfs_bdev.so.6.0 00:05:19.566 LIB libspdk_bdev_gpt.a 00:05:19.864 LIB libspdk_bdev_split.a 00:05:19.864 LIB libspdk_bdev_null.a 00:05:19.864 SO libspdk_bdev_gpt.so.6.0 00:05:19.864 SO libspdk_bdev_split.so.6.0 00:05:19.864 SO libspdk_bdev_null.so.6.0 00:05:19.864 LIB libspdk_bdev_error.a 00:05:19.864 SYMLINK libspdk_blobfs_bdev.so 00:05:19.864 LIB libspdk_bdev_ftl.a 00:05:19.864 SO libspdk_bdev_ftl.so.6.0 00:05:19.864 LIB libspdk_bdev_delay.a 00:05:19.865 LIB libspdk_bdev_passthru.a 00:05:19.865 SO libspdk_bdev_error.so.6.0 00:05:19.865 SYMLINK libspdk_bdev_gpt.so 00:05:19.865 SYMLINK libspdk_bdev_split.so 00:05:19.865 SYMLINK libspdk_bdev_null.so 00:05:19.865 SO libspdk_bdev_passthru.so.6.0 00:05:19.865 SO libspdk_bdev_delay.so.6.0 00:05:19.865 LIB libspdk_bdev_zone_block.a 00:05:19.865 LIB libspdk_bdev_iscsi.a 00:05:19.865 LIB libspdk_bdev_aio.a 00:05:19.865 LIB libspdk_bdev_malloc.a 00:05:19.865 SYMLINK libspdk_bdev_ftl.so 00:05:19.865 SYMLINK libspdk_bdev_error.so 00:05:19.865 SO libspdk_bdev_iscsi.so.6.0 00:05:19.865 SO libspdk_bdev_zone_block.so.6.0 00:05:19.865 SO libspdk_bdev_aio.so.6.0 00:05:19.865 SYMLINK libspdk_bdev_delay.so 00:05:19.865 SYMLINK libspdk_bdev_passthru.so 00:05:19.865 SO libspdk_bdev_malloc.so.6.0 00:05:19.865 SYMLINK libspdk_bdev_iscsi.so 00:05:19.865 SYMLINK libspdk_bdev_zone_block.so 00:05:19.865 SYMLINK libspdk_bdev_aio.so 00:05:19.865 SYMLINK libspdk_bdev_malloc.so 00:05:19.865 LIB libspdk_bdev_virtio.a 00:05:19.865 LIB libspdk_bdev_lvol.a 00:05:19.865 SO libspdk_bdev_virtio.so.6.0 00:05:19.865 SO libspdk_bdev_lvol.so.6.0 00:05:20.124 SYMLINK libspdk_bdev_lvol.so 00:05:20.124 SYMLINK libspdk_bdev_virtio.so 00:05:20.384 LIB libspdk_bdev_raid.a 00:05:20.384 SO libspdk_bdev_raid.so.6.0 00:05:20.384 SYMLINK libspdk_bdev_raid.so 00:05:21.323 LIB libspdk_bdev_nvme.a 00:05:21.323 SO libspdk_bdev_nvme.so.7.1 00:05:21.584 SYMLINK libspdk_bdev_nvme.so 
00:05:22.156 CC module/event/subsystems/vmd/vmd.o 00:05:22.156 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:22.156 CC module/event/subsystems/iobuf/iobuf.o 00:05:22.156 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:22.156 CC module/event/subsystems/sock/sock.o 00:05:22.156 CC module/event/subsystems/fsdev/fsdev.o 00:05:22.156 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:22.156 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:22.156 CC module/event/subsystems/scheduler/scheduler.o 00:05:22.156 CC module/event/subsystems/keyring/keyring.o 00:05:22.156 LIB libspdk_event_keyring.a 00:05:22.156 LIB libspdk_event_vmd.a 00:05:22.156 LIB libspdk_event_scheduler.a 00:05:22.156 LIB libspdk_event_fsdev.a 00:05:22.156 LIB libspdk_event_vhost_blk.a 00:05:22.156 LIB libspdk_event_sock.a 00:05:22.156 LIB libspdk_event_iobuf.a 00:05:22.156 LIB libspdk_event_vfu_tgt.a 00:05:22.156 SO libspdk_event_keyring.so.1.0 00:05:22.416 SO libspdk_event_sock.so.5.0 00:05:22.416 SO libspdk_event_vmd.so.6.0 00:05:22.416 SO libspdk_event_fsdev.so.1.0 00:05:22.416 SO libspdk_event_scheduler.so.4.0 00:05:22.416 SO libspdk_event_vhost_blk.so.3.0 00:05:22.416 SO libspdk_event_iobuf.so.3.0 00:05:22.416 SO libspdk_event_vfu_tgt.so.3.0 00:05:22.416 SYMLINK libspdk_event_keyring.so 00:05:22.416 SYMLINK libspdk_event_vmd.so 00:05:22.416 SYMLINK libspdk_event_sock.so 00:05:22.416 SYMLINK libspdk_event_fsdev.so 00:05:22.416 SYMLINK libspdk_event_vhost_blk.so 00:05:22.416 SYMLINK libspdk_event_iobuf.so 00:05:22.416 SYMLINK libspdk_event_scheduler.so 00:05:22.416 SYMLINK libspdk_event_vfu_tgt.so 00:05:22.679 CC module/event/subsystems/accel/accel.o 00:05:22.964 LIB libspdk_event_accel.a 00:05:22.964 SO libspdk_event_accel.so.6.0 00:05:22.964 SYMLINK libspdk_event_accel.so 00:05:23.267 CC module/event/subsystems/bdev/bdev.o 00:05:23.267 LIB libspdk_event_bdev.a 00:05:23.613 SO libspdk_event_bdev.so.6.0 00:05:23.613 SYMLINK libspdk_event_bdev.so 00:05:23.872 CC 
module/event/subsystems/scsi/scsi.o 00:05:23.872 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:23.872 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:23.872 CC module/event/subsystems/nbd/nbd.o 00:05:23.872 CC module/event/subsystems/ublk/ublk.o 00:05:23.872 LIB libspdk_event_scsi.a 00:05:23.872 LIB libspdk_event_ublk.a 00:05:23.872 LIB libspdk_event_nbd.a 00:05:23.872 SO libspdk_event_ublk.so.3.0 00:05:23.872 SO libspdk_event_nbd.so.6.0 00:05:23.872 SO libspdk_event_scsi.so.6.0 00:05:23.872 LIB libspdk_event_nvmf.a 00:05:24.131 SO libspdk_event_nvmf.so.6.0 00:05:24.131 SYMLINK libspdk_event_ublk.so 00:05:24.131 SYMLINK libspdk_event_nbd.so 00:05:24.131 SYMLINK libspdk_event_scsi.so 00:05:24.131 SYMLINK libspdk_event_nvmf.so 00:05:24.390 CC module/event/subsystems/iscsi/iscsi.o 00:05:24.390 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:24.390 LIB libspdk_event_vhost_scsi.a 00:05:24.390 LIB libspdk_event_iscsi.a 00:05:24.648 SO libspdk_event_vhost_scsi.so.3.0 00:05:24.648 SO libspdk_event_iscsi.so.6.0 00:05:24.648 SYMLINK libspdk_event_vhost_scsi.so 00:05:24.648 SYMLINK libspdk_event_iscsi.so 00:05:24.648 SO libspdk.so.6.0 00:05:24.648 SYMLINK libspdk.so 00:05:25.223 CC app/spdk_nvme_discover/discovery_aer.o 00:05:25.223 CC test/rpc_client/rpc_client_test.o 00:05:25.223 CXX app/trace/trace.o 00:05:25.223 TEST_HEADER include/spdk/assert.h 00:05:25.223 CC app/spdk_lspci/spdk_lspci.o 00:05:25.223 TEST_HEADER include/spdk/accel_module.h 00:05:25.223 TEST_HEADER include/spdk/accel.h 00:05:25.223 TEST_HEADER include/spdk/base64.h 00:05:25.223 TEST_HEADER include/spdk/barrier.h 00:05:25.223 CC app/spdk_nvme_identify/identify.o 00:05:25.223 TEST_HEADER include/spdk/bdev.h 00:05:25.223 TEST_HEADER include/spdk/bdev_module.h 00:05:25.223 TEST_HEADER include/spdk/bit_array.h 00:05:25.223 TEST_HEADER include/spdk/bdev_zone.h 00:05:25.223 TEST_HEADER include/spdk/bit_pool.h 00:05:25.223 TEST_HEADER include/spdk/blob_bdev.h 00:05:25.223 TEST_HEADER 
include/spdk/blobfs_bdev.h 00:05:25.223 CC app/spdk_top/spdk_top.o 00:05:25.223 TEST_HEADER include/spdk/blobfs.h 00:05:25.223 CC app/spdk_nvme_perf/perf.o 00:05:25.223 TEST_HEADER include/spdk/conf.h 00:05:25.223 TEST_HEADER include/spdk/blob.h 00:05:25.223 TEST_HEADER include/spdk/config.h 00:05:25.223 TEST_HEADER include/spdk/cpuset.h 00:05:25.223 TEST_HEADER include/spdk/crc32.h 00:05:25.223 TEST_HEADER include/spdk/crc64.h 00:05:25.223 TEST_HEADER include/spdk/crc16.h 00:05:25.223 CC app/trace_record/trace_record.o 00:05:25.223 TEST_HEADER include/spdk/dif.h 00:05:25.223 TEST_HEADER include/spdk/endian.h 00:05:25.223 TEST_HEADER include/spdk/env_dpdk.h 00:05:25.223 TEST_HEADER include/spdk/dma.h 00:05:25.223 TEST_HEADER include/spdk/env.h 00:05:25.223 TEST_HEADER include/spdk/event.h 00:05:25.223 TEST_HEADER include/spdk/fd.h 00:05:25.223 TEST_HEADER include/spdk/file.h 00:05:25.223 TEST_HEADER include/spdk/fd_group.h 00:05:25.223 TEST_HEADER include/spdk/fsdev.h 00:05:25.223 TEST_HEADER include/spdk/fsdev_module.h 00:05:25.223 TEST_HEADER include/spdk/ftl.h 00:05:25.223 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:25.223 TEST_HEADER include/spdk/hexlify.h 00:05:25.223 TEST_HEADER include/spdk/histogram_data.h 00:05:25.223 TEST_HEADER include/spdk/gpt_spec.h 00:05:25.223 TEST_HEADER include/spdk/idxd.h 00:05:25.223 TEST_HEADER include/spdk/idxd_spec.h 00:05:25.224 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:25.224 TEST_HEADER include/spdk/ioat.h 00:05:25.224 TEST_HEADER include/spdk/ioat_spec.h 00:05:25.224 TEST_HEADER include/spdk/init.h 00:05:25.224 TEST_HEADER include/spdk/iscsi_spec.h 00:05:25.224 TEST_HEADER include/spdk/json.h 00:05:25.224 TEST_HEADER include/spdk/jsonrpc.h 00:05:25.224 TEST_HEADER include/spdk/keyring.h 00:05:25.224 TEST_HEADER include/spdk/keyring_module.h 00:05:25.224 TEST_HEADER include/spdk/log.h 00:05:25.224 TEST_HEADER include/spdk/likely.h 00:05:25.224 TEST_HEADER include/spdk/md5.h 00:05:25.224 TEST_HEADER 
include/spdk/memory.h 00:05:25.224 TEST_HEADER include/spdk/lvol.h 00:05:25.224 TEST_HEADER include/spdk/net.h 00:05:25.224 TEST_HEADER include/spdk/nbd.h 00:05:25.224 TEST_HEADER include/spdk/mmio.h 00:05:25.224 TEST_HEADER include/spdk/notify.h 00:05:25.224 TEST_HEADER include/spdk/nvme.h 00:05:25.224 TEST_HEADER include/spdk/nvme_intel.h 00:05:25.224 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:25.224 TEST_HEADER include/spdk/nvme_spec.h 00:05:25.224 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:25.224 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:25.224 CC app/nvmf_tgt/nvmf_main.o 00:05:25.224 TEST_HEADER include/spdk/nvme_zns.h 00:05:25.224 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:25.224 TEST_HEADER include/spdk/nvmf.h 00:05:25.224 TEST_HEADER include/spdk/nvmf_spec.h 00:05:25.224 TEST_HEADER include/spdk/opal.h 00:05:25.224 CC app/spdk_dd/spdk_dd.o 00:05:25.224 TEST_HEADER include/spdk/opal_spec.h 00:05:25.224 TEST_HEADER include/spdk/nvmf_transport.h 00:05:25.224 TEST_HEADER include/spdk/pci_ids.h 00:05:25.224 CC app/iscsi_tgt/iscsi_tgt.o 00:05:25.224 TEST_HEADER include/spdk/pipe.h 00:05:25.224 TEST_HEADER include/spdk/reduce.h 00:05:25.224 TEST_HEADER include/spdk/queue.h 00:05:25.224 TEST_HEADER include/spdk/scheduler.h 00:05:25.224 TEST_HEADER include/spdk/scsi.h 00:05:25.224 TEST_HEADER include/spdk/rpc.h 00:05:25.224 TEST_HEADER include/spdk/scsi_spec.h 00:05:25.224 TEST_HEADER include/spdk/string.h 00:05:25.224 TEST_HEADER include/spdk/stdinc.h 00:05:25.224 TEST_HEADER include/spdk/sock.h 00:05:25.224 TEST_HEADER include/spdk/thread.h 00:05:25.224 TEST_HEADER include/spdk/trace.h 00:05:25.224 TEST_HEADER include/spdk/trace_parser.h 00:05:25.224 TEST_HEADER include/spdk/ublk.h 00:05:25.224 TEST_HEADER include/spdk/tree.h 00:05:25.224 TEST_HEADER include/spdk/util.h 00:05:25.224 TEST_HEADER include/spdk/uuid.h 00:05:25.224 TEST_HEADER include/spdk/version.h 00:05:25.224 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:25.224 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:05:25.224 TEST_HEADER include/spdk/vmd.h 00:05:25.224 TEST_HEADER include/spdk/xor.h 00:05:25.224 TEST_HEADER include/spdk/vhost.h 00:05:25.224 TEST_HEADER include/spdk/zipf.h 00:05:25.224 CXX test/cpp_headers/accel.o 00:05:25.224 CXX test/cpp_headers/accel_module.o 00:05:25.224 CXX test/cpp_headers/barrier.o 00:05:25.224 CXX test/cpp_headers/base64.o 00:05:25.224 CC app/spdk_tgt/spdk_tgt.o 00:05:25.224 CXX test/cpp_headers/assert.o 00:05:25.224 CXX test/cpp_headers/bdev.o 00:05:25.224 CXX test/cpp_headers/bdev_module.o 00:05:25.224 CXX test/cpp_headers/bdev_zone.o 00:05:25.224 CXX test/cpp_headers/bit_array.o 00:05:25.224 CXX test/cpp_headers/bit_pool.o 00:05:25.224 CXX test/cpp_headers/blob_bdev.o 00:05:25.224 CXX test/cpp_headers/blobfs.o 00:05:25.224 CXX test/cpp_headers/blob.o 00:05:25.224 CXX test/cpp_headers/blobfs_bdev.o 00:05:25.224 CXX test/cpp_headers/conf.o 00:05:25.224 CXX test/cpp_headers/cpuset.o 00:05:25.224 CXX test/cpp_headers/crc16.o 00:05:25.224 CXX test/cpp_headers/config.o 00:05:25.224 CXX test/cpp_headers/dif.o 00:05:25.224 CXX test/cpp_headers/crc32.o 00:05:25.224 CXX test/cpp_headers/dma.o 00:05:25.224 CXX test/cpp_headers/crc64.o 00:05:25.224 CXX test/cpp_headers/endian.o 00:05:25.224 CXX test/cpp_headers/env_dpdk.o 00:05:25.224 CXX test/cpp_headers/event.o 00:05:25.224 CXX test/cpp_headers/env.o 00:05:25.224 CXX test/cpp_headers/fd.o 00:05:25.224 CXX test/cpp_headers/fd_group.o 00:05:25.224 CXX test/cpp_headers/fsdev.o 00:05:25.224 CXX test/cpp_headers/fsdev_module.o 00:05:25.224 CXX test/cpp_headers/file.o 00:05:25.224 CXX test/cpp_headers/fuse_dispatcher.o 00:05:25.224 CXX test/cpp_headers/ftl.o 00:05:25.224 CXX test/cpp_headers/gpt_spec.o 00:05:25.224 CXX test/cpp_headers/histogram_data.o 00:05:25.224 CXX test/cpp_headers/hexlify.o 00:05:25.224 CXX test/cpp_headers/idxd.o 00:05:25.224 CXX test/cpp_headers/init.o 00:05:25.224 CXX test/cpp_headers/idxd_spec.o 00:05:25.224 CXX test/cpp_headers/ioat_spec.o 
00:05:25.224 CXX test/cpp_headers/ioat.o 00:05:25.224 CXX test/cpp_headers/iscsi_spec.o 00:05:25.224 CXX test/cpp_headers/json.o 00:05:25.224 CXX test/cpp_headers/jsonrpc.o 00:05:25.224 CXX test/cpp_headers/keyring_module.o 00:05:25.224 CXX test/cpp_headers/keyring.o 00:05:25.224 CXX test/cpp_headers/likely.o 00:05:25.224 CXX test/cpp_headers/log.o 00:05:25.224 CXX test/cpp_headers/lvol.o 00:05:25.224 CXX test/cpp_headers/md5.o 00:05:25.224 CXX test/cpp_headers/memory.o 00:05:25.224 CXX test/cpp_headers/mmio.o 00:05:25.224 CXX test/cpp_headers/nbd.o 00:05:25.224 CXX test/cpp_headers/nvme.o 00:05:25.224 CXX test/cpp_headers/net.o 00:05:25.224 CXX test/cpp_headers/notify.o 00:05:25.224 CXX test/cpp_headers/nvme_intel.o 00:05:25.224 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:25.224 CXX test/cpp_headers/nvme_ocssd.o 00:05:25.224 CXX test/cpp_headers/nvme_spec.o 00:05:25.224 CXX test/cpp_headers/nvme_zns.o 00:05:25.224 CXX test/cpp_headers/nvmf_cmd.o 00:05:25.224 CXX test/cpp_headers/nvmf.o 00:05:25.224 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:25.224 CXX test/cpp_headers/nvmf_transport.o 00:05:25.224 CXX test/cpp_headers/nvmf_spec.o 00:05:25.224 CC examples/util/zipf/zipf.o 00:05:25.224 CC test/thread/poller_perf/poller_perf.o 00:05:25.224 CC test/app/histogram_perf/histogram_perf.o 00:05:25.224 CXX test/cpp_headers/opal.o 00:05:25.224 CC test/app/jsoncat/jsoncat.o 00:05:25.224 CC examples/ioat/perf/perf.o 00:05:25.224 CC test/env/vtophys/vtophys.o 00:05:25.224 CC examples/ioat/verify/verify.o 00:05:25.495 CC test/dma/test_dma/test_dma.o 00:05:25.495 CC test/app/stub/stub.o 00:05:25.495 CC test/env/memory/memory_ut.o 00:05:25.495 CC test/app/bdev_svc/bdev_svc.o 00:05:25.495 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:25.495 CC test/env/pci/pci_ut.o 00:05:25.495 CC app/fio/bdev/fio_plugin.o 00:05:25.495 LINK spdk_lspci 00:05:25.495 CC app/fio/nvme/fio_plugin.o 00:05:25.761 LINK spdk_nvme_discover 00:05:25.761 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 
00:05:25.761 CC test/env/mem_callbacks/mem_callbacks.o 00:05:25.761 LINK rpc_client_test 00:05:25.761 LINK jsoncat 00:05:25.761 LINK interrupt_tgt 00:05:25.761 LINK poller_perf 00:05:25.761 LINK zipf 00:05:25.761 LINK vtophys 00:05:25.761 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:25.761 LINK nvmf_tgt 00:05:25.761 LINK spdk_trace_record 00:05:25.761 CXX test/cpp_headers/opal_spec.o 00:05:25.761 CXX test/cpp_headers/pci_ids.o 00:05:25.761 CXX test/cpp_headers/pipe.o 00:05:25.761 CXX test/cpp_headers/queue.o 00:05:25.761 LINK spdk_tgt 00:05:25.761 CXX test/cpp_headers/reduce.o 00:05:25.761 CXX test/cpp_headers/rpc.o 00:05:25.761 CXX test/cpp_headers/scheduler.o 00:05:25.761 CXX test/cpp_headers/scsi_spec.o 00:05:25.761 CXX test/cpp_headers/scsi.o 00:05:25.761 CXX test/cpp_headers/stdinc.o 00:05:25.761 CXX test/cpp_headers/string.o 00:05:25.761 CXX test/cpp_headers/sock.o 00:05:26.022 CXX test/cpp_headers/thread.o 00:05:26.022 CXX test/cpp_headers/trace.o 00:05:26.022 CXX test/cpp_headers/trace_parser.o 00:05:26.022 CXX test/cpp_headers/ublk.o 00:05:26.022 CXX test/cpp_headers/util.o 00:05:26.022 CXX test/cpp_headers/tree.o 00:05:26.022 CXX test/cpp_headers/uuid.o 00:05:26.022 CXX test/cpp_headers/version.o 00:05:26.022 CXX test/cpp_headers/vfio_user_pci.o 00:05:26.022 CXX test/cpp_headers/vfio_user_spec.o 00:05:26.022 CXX test/cpp_headers/vhost.o 00:05:26.022 CXX test/cpp_headers/xor.o 00:05:26.022 CXX test/cpp_headers/vmd.o 00:05:26.022 CXX test/cpp_headers/zipf.o 00:05:26.022 LINK histogram_perf 00:05:26.022 LINK bdev_svc 00:05:26.022 LINK iscsi_tgt 00:05:26.022 LINK stub 00:05:26.022 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:26.022 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:26.022 LINK spdk_trace 00:05:26.022 LINK env_dpdk_post_init 00:05:26.022 LINK ioat_perf 00:05:26.022 LINK verify 00:05:26.281 LINK spdk_dd 00:05:26.281 LINK pci_ut 00:05:26.281 LINK test_dma 00:05:26.281 LINK spdk_bdev 00:05:26.281 CC test/event/reactor_perf/reactor_perf.o 
00:05:26.281 CC examples/idxd/perf/perf.o 00:05:26.281 CC test/event/reactor/reactor.o 00:05:26.281 CC examples/sock/hello_world/hello_sock.o 00:05:26.281 CC examples/vmd/lsvmd/lsvmd.o 00:05:26.281 CC examples/vmd/led/led.o 00:05:26.281 CC test/event/event_perf/event_perf.o 00:05:26.281 CC test/event/app_repeat/app_repeat.o 00:05:26.281 CC examples/thread/thread/thread_ex.o 00:05:26.281 CC test/event/scheduler/scheduler.o 00:05:26.281 LINK nvme_fuzz 00:05:26.281 CC app/vhost/vhost.o 00:05:26.539 LINK spdk_nvme_identify 00:05:26.539 LINK spdk_nvme 00:05:26.539 LINK spdk_nvme_perf 00:05:26.539 LINK spdk_top 00:05:26.539 LINK reactor 00:05:26.539 LINK reactor_perf 00:05:26.539 LINK lsvmd 00:05:26.539 LINK event_perf 00:05:26.539 LINK led 00:05:26.539 LINK vhost_fuzz 00:05:26.539 LINK app_repeat 00:05:26.539 LINK mem_callbacks 00:05:26.539 LINK hello_sock 00:05:26.539 LINK vhost 00:05:26.539 LINK scheduler 00:05:26.539 LINK thread 00:05:26.539 LINK idxd_perf 00:05:26.799 CC test/nvme/e2edp/nvme_dp.o 00:05:26.799 CC test/nvme/compliance/nvme_compliance.o 00:05:26.799 CC test/nvme/fused_ordering/fused_ordering.o 00:05:26.799 CC test/nvme/connect_stress/connect_stress.o 00:05:26.799 CC test/nvme/startup/startup.o 00:05:26.799 CC test/nvme/err_injection/err_injection.o 00:05:26.799 CC test/nvme/overhead/overhead.o 00:05:26.799 CC test/nvme/reserve/reserve.o 00:05:26.799 CC test/nvme/fdp/fdp.o 00:05:26.799 CC test/nvme/aer/aer.o 00:05:26.799 CC test/nvme/reset/reset.o 00:05:26.799 CC test/nvme/simple_copy/simple_copy.o 00:05:26.799 CC test/nvme/cuse/cuse.o 00:05:26.799 CC test/nvme/sgl/sgl.o 00:05:26.799 CC test/nvme/boot_partition/boot_partition.o 00:05:26.799 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:26.799 CC test/blobfs/mkfs/mkfs.o 00:05:26.799 CC test/accel/dif/dif.o 00:05:27.058 CC test/lvol/esnap/esnap.o 00:05:27.058 LINK startup 00:05:27.058 LINK connect_stress 00:05:27.058 LINK fused_ordering 00:05:27.058 LINK boot_partition 00:05:27.058 LINK err_injection 
00:05:27.058 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:27.058 LINK memory_ut 00:05:27.058 CC examples/nvme/hotplug/hotplug.o 00:05:27.058 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:27.058 LINK reserve 00:05:27.058 CC examples/nvme/reconnect/reconnect.o 00:05:27.058 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:27.058 LINK doorbell_aers 00:05:27.058 CC examples/nvme/abort/abort.o 00:05:27.058 CC examples/nvme/arbitration/arbitration.o 00:05:27.058 LINK mkfs 00:05:27.058 CC examples/nvme/hello_world/hello_world.o 00:05:27.058 LINK nvme_dp 00:05:27.058 LINK simple_copy 00:05:27.058 LINK overhead 00:05:27.058 LINK aer 00:05:27.058 LINK reset 00:05:27.058 LINK sgl 00:05:27.058 CC examples/accel/perf/accel_perf.o 00:05:27.058 LINK nvme_compliance 00:05:27.058 LINK fdp 00:05:27.058 CC examples/blob/hello_world/hello_blob.o 00:05:27.058 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:27.058 CC examples/blob/cli/blobcli.o 00:05:27.317 LINK pmr_persistence 00:05:27.317 LINK cmb_copy 00:05:27.317 LINK hello_world 00:05:27.317 LINK hotplug 00:05:27.317 LINK arbitration 00:05:27.317 LINK reconnect 00:05:27.317 LINK abort 00:05:27.317 LINK hello_blob 00:05:27.317 LINK dif 00:05:27.317 LINK hello_fsdev 00:05:27.317 LINK nvme_manage 00:05:27.317 LINK iscsi_fuzz 00:05:27.577 LINK accel_perf 00:05:27.577 LINK blobcli 00:05:27.835 LINK cuse 00:05:27.835 CC test/bdev/bdevio/bdevio.o 00:05:28.093 CC examples/bdev/hello_world/hello_bdev.o 00:05:28.093 CC examples/bdev/bdevperf/bdevperf.o 00:05:28.351 LINK hello_bdev 00:05:28.351 LINK bdevio 00:05:28.609 LINK bdevperf 00:05:29.176 CC examples/nvmf/nvmf/nvmf.o 00:05:29.435 LINK nvmf 00:05:30.371 LINK esnap 00:05:30.629 00:05:30.629 real 0m55.536s 00:05:30.629 user 8m18.771s 00:05:30.629 sys 3m45.985s 00:05:30.629 10:33:20 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:30.629 10:33:20 make -- common/autotest_common.sh@10 -- $ set +x 00:05:30.629 ************************************ 00:05:30.629 END 
TEST make 00:05:30.629 ************************************ 00:05:30.890 10:33:20 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:30.890 10:33:20 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:30.890 10:33:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:30.890 10:33:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:30.890 10:33:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:30.890 10:33:20 -- pm/common@44 -- $ pid=3653758 00:05:30.890 10:33:20 -- pm/common@50 -- $ kill -TERM 3653758 00:05:30.890 10:33:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:30.890 10:33:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:30.890 10:33:20 -- pm/common@44 -- $ pid=3653759 00:05:30.890 10:33:20 -- pm/common@50 -- $ kill -TERM 3653759 00:05:30.890 10:33:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:30.890 10:33:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:30.890 10:33:20 -- pm/common@44 -- $ pid=3653762 00:05:30.890 10:33:20 -- pm/common@50 -- $ kill -TERM 3653762 00:05:30.890 10:33:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:30.890 10:33:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:30.890 10:33:20 -- pm/common@44 -- $ pid=3653785 00:05:30.890 10:33:20 -- pm/common@50 -- $ sudo -E kill -TERM 3653785 00:05:30.890 10:33:20 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:30.890 10:33:20 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:30.890 10:33:20 -- common/autotest_common.sh@1692 
-- # [[ y == y ]] 00:05:30.890 10:33:20 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.890 10:33:20 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.890 10:33:20 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.890 10:33:20 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.890 10:33:20 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.890 10:33:20 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.890 10:33:20 -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.890 10:33:20 -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.890 10:33:20 -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.890 10:33:20 -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.890 10:33:20 -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.890 10:33:20 -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.890 10:33:20 -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.890 10:33:20 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.890 10:33:20 -- scripts/common.sh@344 -- # case "$op" in 00:05:30.890 10:33:20 -- scripts/common.sh@345 -- # : 1 00:05:30.890 10:33:20 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.890 10:33:20 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.890 10:33:20 -- scripts/common.sh@365 -- # decimal 1 00:05:30.890 10:33:20 -- scripts/common.sh@353 -- # local d=1 00:05:30.890 10:33:20 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.890 10:33:20 -- scripts/common.sh@355 -- # echo 1 00:05:30.890 10:33:20 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.890 10:33:20 -- scripts/common.sh@366 -- # decimal 2 00:05:30.890 10:33:20 -- scripts/common.sh@353 -- # local d=2 00:05:30.890 10:33:20 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.890 10:33:20 -- scripts/common.sh@355 -- # echo 2 00:05:30.890 10:33:20 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.890 10:33:20 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.890 10:33:20 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.890 10:33:20 -- scripts/common.sh@368 -- # return 0 00:05:30.890 10:33:20 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.890 10:33:20 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.890 --rc genhtml_branch_coverage=1 00:05:30.890 --rc genhtml_function_coverage=1 00:05:30.890 --rc genhtml_legend=1 00:05:30.890 --rc geninfo_all_blocks=1 00:05:30.890 --rc geninfo_unexecuted_blocks=1 00:05:30.890 00:05:30.890 ' 00:05:30.890 10:33:20 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.890 --rc genhtml_branch_coverage=1 00:05:30.890 --rc genhtml_function_coverage=1 00:05:30.890 --rc genhtml_legend=1 00:05:30.890 --rc geninfo_all_blocks=1 00:05:30.890 --rc geninfo_unexecuted_blocks=1 00:05:30.890 00:05:30.890 ' 00:05:30.890 10:33:20 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.890 --rc genhtml_branch_coverage=1 00:05:30.890 --rc 
genhtml_function_coverage=1 00:05:30.890 --rc genhtml_legend=1 00:05:30.890 --rc geninfo_all_blocks=1 00:05:30.890 --rc geninfo_unexecuted_blocks=1 00:05:30.890 00:05:30.890 ' 00:05:30.890 10:33:20 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.890 --rc genhtml_branch_coverage=1 00:05:30.890 --rc genhtml_function_coverage=1 00:05:30.890 --rc genhtml_legend=1 00:05:30.890 --rc geninfo_all_blocks=1 00:05:30.890 --rc geninfo_unexecuted_blocks=1 00:05:30.890 00:05:30.890 ' 00:05:30.890 10:33:20 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.890 10:33:20 -- nvmf/common.sh@7 -- # uname -s 00:05:30.890 10:33:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.890 10:33:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.890 10:33:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.890 10:33:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.890 10:33:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.890 10:33:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.890 10:33:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.890 10:33:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.890 10:33:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.890 10:33:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.890 10:33:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:30.890 10:33:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:30.890 10:33:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.890 10:33:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.890 10:33:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:30.890 10:33:20 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.890 10:33:20 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:30.890 10:33:20 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:30.890 10:33:20 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.890 10:33:20 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.890 10:33:20 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.890 10:33:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.891 10:33:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.891 10:33:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.891 10:33:20 -- paths/export.sh@5 -- # export PATH 00:05:30.891 10:33:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.891 10:33:20 -- nvmf/common.sh@51 -- # : 0 00:05:30.891 10:33:20 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:30.891 10:33:20 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:05:30.891 10:33:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.891 10:33:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.891 10:33:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.891 10:33:20 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:30.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:30.891 10:33:20 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:30.891 10:33:20 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:30.891 10:33:20 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:30.891 10:33:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:30.891 10:33:20 -- spdk/autotest.sh@32 -- # uname -s 00:05:31.150 10:33:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:31.150 10:33:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:31.150 10:33:20 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:31.150 10:33:20 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:31.150 10:33:20 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:31.150 10:33:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:31.150 10:33:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:31.150 10:33:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:31.150 10:33:20 -- spdk/autotest.sh@48 -- # udevadm_pid=3716721 00:05:31.150 10:33:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:31.150 10:33:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:31.150 10:33:20 -- pm/common@17 -- # local monitor 00:05:31.150 10:33:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:31.150 10:33:20 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:31.150 10:33:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:31.150 10:33:20 -- pm/common@21 -- # date +%s 00:05:31.150 10:33:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:31.150 10:33:20 -- pm/common@21 -- # date +%s 00:05:31.150 10:33:20 -- pm/common@25 -- # sleep 1 00:05:31.150 10:33:20 -- pm/common@21 -- # date +%s 00:05:31.150 10:33:20 -- pm/common@21 -- # date +%s 00:05:31.150 10:33:20 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008800 00:05:31.151 10:33:20 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008800 00:05:31.151 10:33:20 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008800 00:05:31.151 10:33:20 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008800 00:05:31.151 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008800_collect-cpu-load.pm.log 00:05:31.151 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008800_collect-vmstat.pm.log 00:05:31.151 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008800_collect-cpu-temp.pm.log 00:05:31.151 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008800_collect-bmc-pm.bmc.pm.log 00:05:32.088 
10:33:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:32.088 10:33:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:32.088 10:33:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:32.088 10:33:21 -- common/autotest_common.sh@10 -- # set +x 00:05:32.088 10:33:21 -- spdk/autotest.sh@59 -- # create_test_list 00:05:32.088 10:33:21 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:32.088 10:33:21 -- common/autotest_common.sh@10 -- # set +x 00:05:32.088 10:33:21 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:32.088 10:33:21 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:32.088 10:33:21 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:32.088 10:33:21 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:32.088 10:33:21 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:32.088 10:33:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:32.088 10:33:21 -- common/autotest_common.sh@1457 -- # uname 00:05:32.088 10:33:21 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:32.088 10:33:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:32.088 10:33:21 -- common/autotest_common.sh@1477 -- # uname 00:05:32.088 10:33:21 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:32.088 10:33:21 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:32.088 10:33:21 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:32.088 lcov: LCOV version 1.15 00:05:32.088 10:33:21 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:50.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:50.177 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:58.298 10:33:46 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:58.298 10:33:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:58.298 10:33:46 -- common/autotest_common.sh@10 -- # set +x 00:05:58.298 10:33:46 -- spdk/autotest.sh@78 -- # rm -f 00:05:58.298 10:33:46 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:59.680 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:05:59.680 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:05:59.680 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:05:59.680 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:05:59.939 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:05:59.939 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:05:59.939 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:05:59.939 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:05:59.939 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:05:59.939 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:05:59.939 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:05:59.939 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:05:59.939 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:05:59.939 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:05:59.939 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:06:00.199 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:06:00.199 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:06:00.199 10:33:49 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:00.199 10:33:49 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:00.199 10:33:49 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:00.199 10:33:49 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:00.199 10:33:49 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:00.199 10:33:49 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:00.199 10:33:49 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:00.199 10:33:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:00.199 10:33:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:00.199 10:33:49 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:00.199 10:33:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:00.199 10:33:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:00.199 10:33:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:00.199 10:33:49 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:00.199 10:33:49 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:00.199 No valid GPT data, bailing 00:06:00.199 10:33:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:00.199 10:33:49 -- scripts/common.sh@394 -- # pt= 00:06:00.199 10:33:49 -- scripts/common.sh@395 -- # return 1 00:06:00.199 10:33:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:00.199 1+0 records in 00:06:00.199 1+0 records out 00:06:00.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0042647 s, 246 MB/s 00:06:00.199 10:33:49 -- spdk/autotest.sh@105 -- # sync 00:06:00.199 10:33:49 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:00.199 10:33:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:00.199 10:33:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:06.773 10:33:55 -- spdk/autotest.sh@111 -- # uname -s 00:06:06.773 10:33:55 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:06.773 10:33:55 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:06.773 10:33:55 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:08.684 Hugepages 00:06:08.684 node hugesize free / total 00:06:08.684 node0 1048576kB 0 / 0 00:06:08.684 node0 2048kB 0 / 0 00:06:08.684 node1 1048576kB 0 / 0 00:06:08.684 node1 2048kB 0 / 0 00:06:08.684 00:06:08.684 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:08.684 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:06:08.684 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:06:08.684 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:06:08.684 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:06:08.684 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:06:08.684 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:06:08.684 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:06:08.684 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:06:08.684 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:06:08.684 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:06:08.684 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:06:08.684 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:06:08.684 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:06:08.684 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:06:08.684 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:06:08.684 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:06:08.684 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:06:08.684 10:33:58 -- spdk/autotest.sh@117 -- # uname -s 00:06:08.684 10:33:58 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:08.684 10:33:58 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:06:08.684 10:33:58 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:11.984 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:11.984 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:11.984 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:11.984 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:11.984 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:11.984 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:11.984 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:11.984 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:11.984 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:11.984 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:11.984 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:11.984 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:11.984 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:11.984 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:11.984 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:11.984 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:13.361 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:06:13.361 10:34:02 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:14.298 10:34:03 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:14.298 10:34:03 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:14.298 10:34:03 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:14.298 10:34:03 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:14.298 10:34:03 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:14.298 10:34:03 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:14.298 10:34:03 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:14.298 10:34:03 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:14.298 10:34:03 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:06:14.298 10:34:03 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:14.298 10:34:03 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:06:14.298 10:34:03 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:17.590 Waiting for block devices as requested 00:06:17.590 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:06:17.590 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:17.590 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:17.590 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:17.590 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:17.590 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:17.590 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:17.590 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:17.851 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:17.851 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:17.851 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:17.851 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:18.111 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:18.111 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:18.111 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:18.370 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:18.370 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:18.370 10:34:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:18.370 10:34:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:06:18.370 10:34:08 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:18.370 10:34:08 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:06:18.370 10:34:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:06:18.370 10:34:08 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:06:18.370 10:34:08 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:06:18.370 10:34:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:18.370 10:34:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:18.370 10:34:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:18.370 10:34:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:18.370 10:34:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:18.370 10:34:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:18.370 10:34:08 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:06:18.370 10:34:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:18.370 10:34:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:18.370 10:34:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:18.370 10:34:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:18.370 10:34:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:18.370 10:34:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:18.370 10:34:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:18.370 10:34:08 -- common/autotest_common.sh@1543 -- # continue 00:06:18.370 10:34:08 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:18.370 10:34:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:18.370 10:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:18.628 10:34:08 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:18.628 10:34:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.628 10:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:18.629 10:34:08 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:21.930 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:21.930 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:06:21.930 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:21.930 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:21.930 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:21.930 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:21.930 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:21.930 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:21.930 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:21.930 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:21.930 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:21.930 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:21.930 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:21.930 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:21.930 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:21.930 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:22.869 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:06:22.869 10:34:12 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:22.869 10:34:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:22.869 10:34:12 -- common/autotest_common.sh@10 -- # set +x 00:06:22.869 10:34:12 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:22.869 10:34:12 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:22.869 10:34:12 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:22.869 10:34:12 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:22.869 10:34:12 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:22.869 10:34:12 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:22.869 10:34:12 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:22.869 10:34:12 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:22.869 10:34:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:22.869 10:34:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:22.869 10:34:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:06:22.869 10:34:12 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:22.869 10:34:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:23.128 10:34:12 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:23.128 10:34:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:06:23.128 10:34:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:23.128 10:34:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:06:23.128 10:34:12 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:06:23.128 10:34:12 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:23.128 10:34:12 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:06:23.128 10:34:12 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:06:23.128 10:34:12 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:06:23.128 10:34:12 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:06:23.128 10:34:12 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3730962 00:06:23.128 10:34:12 -- common/autotest_common.sh@1585 -- # waitforlisten 3730962 00:06:23.128 10:34:12 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.128 10:34:12 -- common/autotest_common.sh@835 -- # '[' -z 3730962 ']' 00:06:23.128 10:34:12 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.128 10:34:12 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.128 10:34:12 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:23.128 10:34:12 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.128 10:34:12 -- common/autotest_common.sh@10 -- # set +x 00:06:23.128 [2024-11-19 10:34:12.787132] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:06:23.128 [2024-11-19 10:34:12.787184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730962 ] 00:06:23.128 [2024-11-19 10:34:12.863895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.128 [2024-11-19 10:34:12.904235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.065 10:34:13 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.065 10:34:13 -- common/autotest_common.sh@868 -- # return 0 00:06:24.065 10:34:13 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:06:24.065 10:34:13 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:06:24.065 10:34:13 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:06:27.355 nvme0n1 00:06:27.355 10:34:16 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:27.355 [2024-11-19 10:34:16.788931] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:27.355 request: 00:06:27.355 { 00:06:27.355 "nvme_ctrlr_name": "nvme0", 00:06:27.355 "password": "test", 00:06:27.355 "method": "bdev_nvme_opal_revert", 00:06:27.355 "req_id": 1 00:06:27.355 } 00:06:27.355 Got JSON-RPC error response 00:06:27.355 response: 00:06:27.355 { 00:06:27.355 "code": -32602, 00:06:27.355 "message": "Invalid parameters" 00:06:27.355 } 00:06:27.355 10:34:16 -- common/autotest_common.sh@1591 -- # true 
00:06:27.355 10:34:16 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:06:27.355 10:34:16 -- common/autotest_common.sh@1595 -- # killprocess 3730962 00:06:27.355 10:34:16 -- common/autotest_common.sh@954 -- # '[' -z 3730962 ']' 00:06:27.355 10:34:16 -- common/autotest_common.sh@958 -- # kill -0 3730962 00:06:27.355 10:34:16 -- common/autotest_common.sh@959 -- # uname 00:06:27.355 10:34:16 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.355 10:34:16 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3730962 00:06:27.355 10:34:16 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.355 10:34:16 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.355 10:34:16 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3730962' 00:06:27.355 killing process with pid 3730962 00:06:27.355 10:34:16 -- common/autotest_common.sh@973 -- # kill 3730962 00:06:27.355 10:34:16 -- common/autotest_common.sh@978 -- # wait 3730962 00:06:29.333 10:34:19 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:29.333 10:34:19 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:29.333 10:34:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:29.333 10:34:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:29.333 10:34:19 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:29.333 10:34:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:29.333 10:34:19 -- common/autotest_common.sh@10 -- # set +x 00:06:29.333 10:34:19 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:29.333 10:34:19 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:29.333 10:34:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.333 10:34:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.333 10:34:19 -- common/autotest_common.sh@10 -- # set +x 00:06:29.333 ************************************ 00:06:29.333 START TEST env 00:06:29.333 
************************************ 00:06:29.333 10:34:19 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:29.594 * Looking for test storage... 00:06:29.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:29.594 10:34:19 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.594 10:34:19 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:29.594 10:34:19 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:29.594 10:34:19 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:29.594 10:34:19 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.594 10:34:19 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.594 10:34:19 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.594 10:34:19 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.594 10:34:19 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.594 10:34:19 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.594 10:34:19 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.594 10:34:19 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.594 10:34:19 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.594 10:34:19 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.594 10:34:19 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.594 10:34:19 env -- scripts/common.sh@344 -- # case "$op" in 00:06:29.594 10:34:19 env -- scripts/common.sh@345 -- # : 1 00:06:29.594 10:34:19 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.594 10:34:19 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.594 10:34:19 env -- scripts/common.sh@365 -- # decimal 1 00:06:29.594 10:34:19 env -- scripts/common.sh@353 -- # local d=1 00:06:29.594 10:34:19 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.594 10:34:19 env -- scripts/common.sh@355 -- # echo 1 00:06:29.594 10:34:19 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.594 10:34:19 env -- scripts/common.sh@366 -- # decimal 2 00:06:29.594 10:34:19 env -- scripts/common.sh@353 -- # local d=2 00:06:29.594 10:34:19 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.594 10:34:19 env -- scripts/common.sh@355 -- # echo 2 00:06:29.594 10:34:19 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.594 10:34:19 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.594 10:34:19 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.594 10:34:19 env -- scripts/common.sh@368 -- # return 0 00:06:29.594 10:34:19 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.594 10:34:19 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:29.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.594 --rc genhtml_branch_coverage=1 00:06:29.594 --rc genhtml_function_coverage=1 00:06:29.594 --rc genhtml_legend=1 00:06:29.594 --rc geninfo_all_blocks=1 00:06:29.594 --rc geninfo_unexecuted_blocks=1 00:06:29.594 00:06:29.594 ' 00:06:29.594 10:34:19 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:29.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.594 --rc genhtml_branch_coverage=1 00:06:29.594 --rc genhtml_function_coverage=1 00:06:29.594 --rc genhtml_legend=1 00:06:29.594 --rc geninfo_all_blocks=1 00:06:29.594 --rc geninfo_unexecuted_blocks=1 00:06:29.594 00:06:29.594 ' 00:06:29.594 10:34:19 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:29.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:29.594 --rc genhtml_branch_coverage=1 00:06:29.594 --rc genhtml_function_coverage=1 00:06:29.594 --rc genhtml_legend=1 00:06:29.594 --rc geninfo_all_blocks=1 00:06:29.594 --rc geninfo_unexecuted_blocks=1 00:06:29.594 00:06:29.594 ' 00:06:29.594 10:34:19 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:29.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.594 --rc genhtml_branch_coverage=1 00:06:29.594 --rc genhtml_function_coverage=1 00:06:29.594 --rc genhtml_legend=1 00:06:29.594 --rc geninfo_all_blocks=1 00:06:29.594 --rc geninfo_unexecuted_blocks=1 00:06:29.594 00:06:29.594 ' 00:06:29.594 10:34:19 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:29.594 10:34:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.594 10:34:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.594 10:34:19 env -- common/autotest_common.sh@10 -- # set +x 00:06:29.594 ************************************ 00:06:29.594 START TEST env_memory 00:06:29.594 ************************************ 00:06:29.594 10:34:19 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:29.594 00:06:29.594 00:06:29.594 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.594 http://cunit.sourceforge.net/ 00:06:29.594 00:06:29.594 00:06:29.594 Suite: memory 00:06:29.594 Test: alloc and free memory map ...[2024-11-19 10:34:19.310253] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:29.594 passed 00:06:29.594 Test: mem map translation ...[2024-11-19 10:34:19.328134] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:29.594 [2024-11-19 
10:34:19.328146] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:29.594 [2024-11-19 10:34:19.328179] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:29.594 [2024-11-19 10:34:19.328185] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:29.594 passed 00:06:29.594 Test: mem map registration ...[2024-11-19 10:34:19.364907] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:29.594 [2024-11-19 10:34:19.364921] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:29.594 passed 00:06:29.853 Test: mem map adjacent registrations ...passed 00:06:29.853 00:06:29.853 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.853 suites 1 1 n/a 0 0 00:06:29.853 tests 4 4 4 0 0 00:06:29.853 asserts 152 152 152 0 n/a 00:06:29.853 00:06:29.853 Elapsed time = 0.131 seconds 00:06:29.853 00:06:29.853 real 0m0.139s 00:06:29.853 user 0m0.133s 00:06:29.853 sys 0m0.006s 00:06:29.853 10:34:19 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.853 10:34:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:29.853 ************************************ 00:06:29.853 END TEST env_memory 00:06:29.853 ************************************ 00:06:29.853 10:34:19 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:29.853 10:34:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:06:29.853 10:34:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.853 10:34:19 env -- common/autotest_common.sh@10 -- # set +x 00:06:29.853 ************************************ 00:06:29.853 START TEST env_vtophys 00:06:29.853 ************************************ 00:06:29.853 10:34:19 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:29.853 EAL: lib.eal log level changed from notice to debug 00:06:29.853 EAL: Detected lcore 0 as core 0 on socket 0 00:06:29.853 EAL: Detected lcore 1 as core 1 on socket 0 00:06:29.853 EAL: Detected lcore 2 as core 2 on socket 0 00:06:29.853 EAL: Detected lcore 3 as core 3 on socket 0 00:06:29.853 EAL: Detected lcore 4 as core 4 on socket 0 00:06:29.853 EAL: Detected lcore 5 as core 5 on socket 0 00:06:29.853 EAL: Detected lcore 6 as core 6 on socket 0 00:06:29.853 EAL: Detected lcore 7 as core 8 on socket 0 00:06:29.853 EAL: Detected lcore 8 as core 9 on socket 0 00:06:29.853 EAL: Detected lcore 9 as core 10 on socket 0 00:06:29.853 EAL: Detected lcore 10 as core 11 on socket 0 00:06:29.853 EAL: Detected lcore 11 as core 12 on socket 0 00:06:29.854 EAL: Detected lcore 12 as core 13 on socket 0 00:06:29.854 EAL: Detected lcore 13 as core 16 on socket 0 00:06:29.854 EAL: Detected lcore 14 as core 17 on socket 0 00:06:29.854 EAL: Detected lcore 15 as core 18 on socket 0 00:06:29.854 EAL: Detected lcore 16 as core 19 on socket 0 00:06:29.854 EAL: Detected lcore 17 as core 20 on socket 0 00:06:29.854 EAL: Detected lcore 18 as core 21 on socket 0 00:06:29.854 EAL: Detected lcore 19 as core 25 on socket 0 00:06:29.854 EAL: Detected lcore 20 as core 26 on socket 0 00:06:29.854 EAL: Detected lcore 21 as core 27 on socket 0 00:06:29.854 EAL: Detected lcore 22 as core 28 on socket 0 00:06:29.854 EAL: Detected lcore 23 as core 29 on socket 0 00:06:29.854 EAL: Detected lcore 24 as core 0 on socket 1 00:06:29.854 EAL: Detected lcore 25 
as core 1 on socket 1 00:06:29.854 EAL: Detected lcore 26 as core 2 on socket 1 00:06:29.854 EAL: Detected lcore 27 as core 3 on socket 1 00:06:29.854 EAL: Detected lcore 28 as core 4 on socket 1 00:06:29.854 EAL: Detected lcore 29 as core 5 on socket 1 00:06:29.854 EAL: Detected lcore 30 as core 6 on socket 1 00:06:29.854 EAL: Detected lcore 31 as core 8 on socket 1 00:06:29.854 EAL: Detected lcore 32 as core 10 on socket 1 00:06:29.854 EAL: Detected lcore 33 as core 11 on socket 1 00:06:29.854 EAL: Detected lcore 34 as core 12 on socket 1 00:06:29.854 EAL: Detected lcore 35 as core 13 on socket 1 00:06:29.854 EAL: Detected lcore 36 as core 16 on socket 1 00:06:29.854 EAL: Detected lcore 37 as core 17 on socket 1 00:06:29.854 EAL: Detected lcore 38 as core 18 on socket 1 00:06:29.854 EAL: Detected lcore 39 as core 19 on socket 1 00:06:29.854 EAL: Detected lcore 40 as core 20 on socket 1 00:06:29.854 EAL: Detected lcore 41 as core 21 on socket 1 00:06:29.854 EAL: Detected lcore 42 as core 24 on socket 1 00:06:29.854 EAL: Detected lcore 43 as core 25 on socket 1 00:06:29.854 EAL: Detected lcore 44 as core 26 on socket 1 00:06:29.854 EAL: Detected lcore 45 as core 27 on socket 1 00:06:29.854 EAL: Detected lcore 46 as core 28 on socket 1 00:06:29.854 EAL: Detected lcore 47 as core 29 on socket 1 00:06:29.854 EAL: Detected lcore 48 as core 0 on socket 0 00:06:29.854 EAL: Detected lcore 49 as core 1 on socket 0 00:06:29.854 EAL: Detected lcore 50 as core 2 on socket 0 00:06:29.854 EAL: Detected lcore 51 as core 3 on socket 0 00:06:29.854 EAL: Detected lcore 52 as core 4 on socket 0 00:06:29.854 EAL: Detected lcore 53 as core 5 on socket 0 00:06:29.854 EAL: Detected lcore 54 as core 6 on socket 0 00:06:29.854 EAL: Detected lcore 55 as core 8 on socket 0 00:06:29.854 EAL: Detected lcore 56 as core 9 on socket 0 00:06:29.854 EAL: Detected lcore 57 as core 10 on socket 0 00:06:29.854 EAL: Detected lcore 58 as core 11 on socket 0 00:06:29.854 EAL: Detected lcore 59 as core 
12 on socket 0 00:06:29.854 EAL: Detected lcore 60 as core 13 on socket 0 00:06:29.854 EAL: Detected lcore 61 as core 16 on socket 0 00:06:29.854 EAL: Detected lcore 62 as core 17 on socket 0 00:06:29.854 EAL: Detected lcore 63 as core 18 on socket 0 00:06:29.854 EAL: Detected lcore 64 as core 19 on socket 0 00:06:29.854 EAL: Detected lcore 65 as core 20 on socket 0 00:06:29.854 EAL: Detected lcore 66 as core 21 on socket 0 00:06:29.854 EAL: Detected lcore 67 as core 25 on socket 0 00:06:29.854 EAL: Detected lcore 68 as core 26 on socket 0 00:06:29.854 EAL: Detected lcore 69 as core 27 on socket 0 00:06:29.854 EAL: Detected lcore 70 as core 28 on socket 0 00:06:29.854 EAL: Detected lcore 71 as core 29 on socket 0 00:06:29.854 EAL: Detected lcore 72 as core 0 on socket 1 00:06:29.854 EAL: Detected lcore 73 as core 1 on socket 1 00:06:29.854 EAL: Detected lcore 74 as core 2 on socket 1 00:06:29.854 EAL: Detected lcore 75 as core 3 on socket 1 00:06:29.854 EAL: Detected lcore 76 as core 4 on socket 1 00:06:29.854 EAL: Detected lcore 77 as core 5 on socket 1 00:06:29.854 EAL: Detected lcore 78 as core 6 on socket 1 00:06:29.854 EAL: Detected lcore 79 as core 8 on socket 1 00:06:29.854 EAL: Detected lcore 80 as core 10 on socket 1 00:06:29.854 EAL: Detected lcore 81 as core 11 on socket 1 00:06:29.854 EAL: Detected lcore 82 as core 12 on socket 1 00:06:29.854 EAL: Detected lcore 83 as core 13 on socket 1 00:06:29.854 EAL: Detected lcore 84 as core 16 on socket 1 00:06:29.854 EAL: Detected lcore 85 as core 17 on socket 1 00:06:29.854 EAL: Detected lcore 86 as core 18 on socket 1 00:06:29.854 EAL: Detected lcore 87 as core 19 on socket 1 00:06:29.854 EAL: Detected lcore 88 as core 20 on socket 1 00:06:29.854 EAL: Detected lcore 89 as core 21 on socket 1 00:06:29.854 EAL: Detected lcore 90 as core 24 on socket 1 00:06:29.854 EAL: Detected lcore 91 as core 25 on socket 1 00:06:29.854 EAL: Detected lcore 92 as core 26 on socket 1 00:06:29.854 EAL: Detected lcore 93 as core 
27 on socket 1 00:06:29.854 EAL: Detected lcore 94 as core 28 on socket 1 00:06:29.854 EAL: Detected lcore 95 as core 29 on socket 1 00:06:29.854 EAL: Maximum logical cores by configuration: 128 00:06:29.854 EAL: Detected CPU lcores: 96 00:06:29.854 EAL: Detected NUMA nodes: 2 00:06:29.854 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:29.854 EAL: Detected shared linkage of DPDK 00:06:29.854 EAL: No shared files mode enabled, IPC will be disabled 00:06:29.854 EAL: Bus pci wants IOVA as 'DC' 00:06:29.854 EAL: Buses did not request a specific IOVA mode. 00:06:29.854 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:29.854 EAL: Selected IOVA mode 'VA' 00:06:29.854 EAL: Probing VFIO support... 00:06:29.854 EAL: IOMMU type 1 (Type 1) is supported 00:06:29.854 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:29.854 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:29.854 EAL: VFIO support initialized 00:06:29.854 EAL: Ask a virtual area of 0x2e000 bytes 00:06:29.854 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:29.854 EAL: Setting up physically contiguous memory... 
00:06:29.854 EAL: Setting maximum number of open files to 524288 00:06:29.854 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:29.854 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:29.854 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:29.854 EAL: Ask a virtual area of 0x61000 bytes 00:06:29.854 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:29.854 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:29.854 EAL: Ask a virtual area of 0x400000000 bytes 00:06:29.854 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:29.854 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:29.854 EAL: Ask a virtual area of 0x61000 bytes 00:06:29.854 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:29.854 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:29.854 EAL: Ask a virtual area of 0x400000000 bytes 00:06:29.854 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:29.854 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:29.854 EAL: Ask a virtual area of 0x61000 bytes 00:06:29.854 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:29.854 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:29.854 EAL: Ask a virtual area of 0x400000000 bytes 00:06:29.854 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:29.854 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:29.854 EAL: Ask a virtual area of 0x61000 bytes 00:06:29.854 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:29.854 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:29.854 EAL: Ask a virtual area of 0x400000000 bytes 00:06:29.854 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:29.854 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:29.854 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:06:29.854 EAL: Ask a virtual area of 0x61000 bytes 00:06:29.854 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:29.854 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:29.854 EAL: Ask a virtual area of 0x400000000 bytes 00:06:29.854 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:29.854 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:29.854 EAL: Ask a virtual area of 0x61000 bytes 00:06:29.854 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:29.854 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:29.854 EAL: Ask a virtual area of 0x400000000 bytes 00:06:29.854 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:29.854 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:29.854 EAL: Ask a virtual area of 0x61000 bytes 00:06:29.854 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:29.854 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:29.854 EAL: Ask a virtual area of 0x400000000 bytes 00:06:29.854 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:29.854 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:29.854 EAL: Ask a virtual area of 0x61000 bytes 00:06:29.854 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:29.854 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:29.854 EAL: Ask a virtual area of 0x400000000 bytes 00:06:29.854 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:29.854 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:29.854 EAL: Hugepages will be freed exactly as allocated. 
00:06:29.854 EAL: No shared files mode enabled, IPC is disabled 00:06:29.854 EAL: No shared files mode enabled, IPC is disabled 00:06:29.854 EAL: TSC frequency is ~2100000 KHz 00:06:29.854 EAL: Main lcore 0 is ready (tid=7f033748fa00;cpuset=[0]) 00:06:29.854 EAL: Trying to obtain current memory policy. 00:06:29.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:29.854 EAL: Restoring previous memory policy: 0 00:06:29.854 EAL: request: mp_malloc_sync 00:06:29.854 EAL: No shared files mode enabled, IPC is disabled 00:06:29.854 EAL: Heap on socket 0 was expanded by 2MB 00:06:29.854 EAL: No shared files mode enabled, IPC is disabled 00:06:29.854 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:29.854 EAL: Mem event callback 'spdk:(nil)' registered 00:06:29.854 00:06:29.854 00:06:29.855 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.855 http://cunit.sourceforge.net/ 00:06:29.855 00:06:29.855 00:06:29.855 Suite: components_suite 00:06:29.855 Test: vtophys_malloc_test ...passed 00:06:29.855 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:29.855 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:29.855 EAL: Restoring previous memory policy: 4 00:06:29.855 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.855 EAL: request: mp_malloc_sync 00:06:29.855 EAL: No shared files mode enabled, IPC is disabled 00:06:29.855 EAL: Heap on socket 0 was expanded by 4MB 00:06:29.855 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.855 EAL: request: mp_malloc_sync 00:06:29.855 EAL: No shared files mode enabled, IPC is disabled 00:06:29.855 EAL: Heap on socket 0 was shrunk by 4MB 00:06:29.855 EAL: Trying to obtain current memory policy. 
00:06:29.855 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:29.855 EAL: Restoring previous memory policy: 4 00:06:29.855 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.855 EAL: request: mp_malloc_sync 00:06:29.855 EAL: No shared files mode enabled, IPC is disabled 00:06:29.855 EAL: Heap on socket 0 was expanded by 6MB 00:06:29.855 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.855 EAL: request: mp_malloc_sync 00:06:29.855 EAL: No shared files mode enabled, IPC is disabled 00:06:29.855 EAL: Heap on socket 0 was shrunk by 6MB 00:06:29.855 EAL: Trying to obtain current memory policy. 00:06:29.855 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:29.855 EAL: Restoring previous memory policy: 4 00:06:29.855 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.855 EAL: request: mp_malloc_sync 00:06:29.855 EAL: No shared files mode enabled, IPC is disabled 00:06:29.855 EAL: Heap on socket 0 was expanded by 10MB 00:06:29.855 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.855 EAL: request: mp_malloc_sync 00:06:29.855 EAL: No shared files mode enabled, IPC is disabled 00:06:29.855 EAL: Heap on socket 0 was shrunk by 10MB 00:06:29.855 EAL: Trying to obtain current memory policy. 00:06:29.855 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:29.855 EAL: Restoring previous memory policy: 4 00:06:29.855 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.855 EAL: request: mp_malloc_sync 00:06:29.855 EAL: No shared files mode enabled, IPC is disabled 00:06:29.855 EAL: Heap on socket 0 was expanded by 18MB 00:06:29.855 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.855 EAL: request: mp_malloc_sync 00:06:29.855 EAL: No shared files mode enabled, IPC is disabled 00:06:29.855 EAL: Heap on socket 0 was shrunk by 18MB 00:06:29.855 EAL: Trying to obtain current memory policy. 
00:06:29.855 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:29.855 EAL: Restoring previous memory policy: 4 00:06:29.855 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.855 EAL: request: mp_malloc_sync 00:06:29.855 EAL: No shared files mode enabled, IPC is disabled 00:06:29.855 EAL: Heap on socket 0 was expanded by 34MB 00:06:29.855 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.855 EAL: request: mp_malloc_sync 00:06:29.855 EAL: No shared files mode enabled, IPC is disabled 00:06:29.855 EAL: Heap on socket 0 was shrunk by 34MB 00:06:29.855 EAL: Trying to obtain current memory policy. 00:06:29.855 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:29.855 EAL: Restoring previous memory policy: 4 00:06:29.855 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.855 EAL: request: mp_malloc_sync 00:06:29.855 EAL: No shared files mode enabled, IPC is disabled 00:06:29.855 EAL: Heap on socket 0 was expanded by 66MB 00:06:29.855 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.855 EAL: request: mp_malloc_sync 00:06:29.855 EAL: No shared files mode enabled, IPC is disabled 00:06:29.855 EAL: Heap on socket 0 was shrunk by 66MB 00:06:29.855 EAL: Trying to obtain current memory policy. 00:06:29.855 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.114 EAL: Restoring previous memory policy: 4 00:06:30.114 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.114 EAL: request: mp_malloc_sync 00:06:30.114 EAL: No shared files mode enabled, IPC is disabled 00:06:30.114 EAL: Heap on socket 0 was expanded by 130MB 00:06:30.114 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.114 EAL: request: mp_malloc_sync 00:06:30.114 EAL: No shared files mode enabled, IPC is disabled 00:06:30.114 EAL: Heap on socket 0 was shrunk by 130MB 00:06:30.114 EAL: Trying to obtain current memory policy. 
00:06:30.114 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.114 EAL: Restoring previous memory policy: 4 00:06:30.114 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.114 EAL: request: mp_malloc_sync 00:06:30.114 EAL: No shared files mode enabled, IPC is disabled 00:06:30.114 EAL: Heap on socket 0 was expanded by 258MB 00:06:30.114 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.114 EAL: request: mp_malloc_sync 00:06:30.114 EAL: No shared files mode enabled, IPC is disabled 00:06:30.114 EAL: Heap on socket 0 was shrunk by 258MB 00:06:30.114 EAL: Trying to obtain current memory policy. 00:06:30.114 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.374 EAL: Restoring previous memory policy: 4 00:06:30.374 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.374 EAL: request: mp_malloc_sync 00:06:30.374 EAL: No shared files mode enabled, IPC is disabled 00:06:30.374 EAL: Heap on socket 0 was expanded by 514MB 00:06:30.374 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.374 EAL: request: mp_malloc_sync 00:06:30.374 EAL: No shared files mode enabled, IPC is disabled 00:06:30.374 EAL: Heap on socket 0 was shrunk by 514MB 00:06:30.374 EAL: Trying to obtain current memory policy. 
00:06:30.374 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.632 EAL: Restoring previous memory policy: 4 00:06:30.632 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.632 EAL: request: mp_malloc_sync 00:06:30.632 EAL: No shared files mode enabled, IPC is disabled 00:06:30.632 EAL: Heap on socket 0 was expanded by 1026MB 00:06:30.632 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.891 EAL: request: mp_malloc_sync 00:06:30.891 EAL: No shared files mode enabled, IPC is disabled 00:06:30.891 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:30.891 passed 00:06:30.891 00:06:30.891 Run Summary: Type Total Ran Passed Failed Inactive 00:06:30.891 suites 1 1 n/a 0 0 00:06:30.891 tests 2 2 2 0 0 00:06:30.891 asserts 497 497 497 0 n/a 00:06:30.891 00:06:30.891 Elapsed time = 0.972 seconds 00:06:30.891 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.891 EAL: request: mp_malloc_sync 00:06:30.891 EAL: No shared files mode enabled, IPC is disabled 00:06:30.891 EAL: Heap on socket 0 was shrunk by 2MB 00:06:30.891 EAL: No shared files mode enabled, IPC is disabled 00:06:30.891 EAL: No shared files mode enabled, IPC is disabled 00:06:30.891 EAL: No shared files mode enabled, IPC is disabled 00:06:30.891 00:06:30.891 real 0m1.097s 00:06:30.891 user 0m0.652s 00:06:30.891 sys 0m0.420s 00:06:30.891 10:34:20 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.891 10:34:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:30.891 ************************************ 00:06:30.891 END TEST env_vtophys 00:06:30.891 ************************************ 00:06:30.891 10:34:20 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:30.891 10:34:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.891 10:34:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.891 10:34:20 env -- common/autotest_common.sh@10 -- # set +x 00:06:30.891 
************************************ 00:06:30.891 START TEST env_pci 00:06:30.891 ************************************ 00:06:30.891 10:34:20 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:30.891 00:06:30.891 00:06:30.891 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.891 http://cunit.sourceforge.net/ 00:06:30.891 00:06:30.891 00:06:30.891 Suite: pci 00:06:30.891 Test: pci_hook ...[2024-11-19 10:34:20.659787] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3732300 has claimed it 00:06:31.150 EAL: Cannot find device (10000:00:01.0) 00:06:31.150 EAL: Failed to attach device on primary process 00:06:31.150 passed 00:06:31.150 00:06:31.150 Run Summary: Type Total Ran Passed Failed Inactive 00:06:31.150 suites 1 1 n/a 0 0 00:06:31.150 tests 1 1 1 0 0 00:06:31.150 asserts 25 25 25 0 n/a 00:06:31.150 00:06:31.150 Elapsed time = 0.024 seconds 00:06:31.150 00:06:31.150 real 0m0.040s 00:06:31.150 user 0m0.011s 00:06:31.150 sys 0m0.029s 00:06:31.150 10:34:20 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.150 10:34:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:31.150 ************************************ 00:06:31.150 END TEST env_pci 00:06:31.150 ************************************ 00:06:31.150 10:34:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:31.150 10:34:20 env -- env/env.sh@15 -- # uname 00:06:31.150 10:34:20 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:31.150 10:34:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:31.150 10:34:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:31.150 10:34:20 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:31.150 10:34:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.150 10:34:20 env -- common/autotest_common.sh@10 -- # set +x 00:06:31.150 ************************************ 00:06:31.150 START TEST env_dpdk_post_init 00:06:31.150 ************************************ 00:06:31.150 10:34:20 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:31.150 EAL: Detected CPU lcores: 96 00:06:31.150 EAL: Detected NUMA nodes: 2 00:06:31.150 EAL: Detected shared linkage of DPDK 00:06:31.150 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:31.150 EAL: Selected IOVA mode 'VA' 00:06:31.150 EAL: VFIO support initialized 00:06:31.150 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:31.150 EAL: Using IOMMU type 1 (Type 1) 00:06:31.150 EAL: Ignore mapping IO port bar(1) 00:06:31.150 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:06:31.150 EAL: Ignore mapping IO port bar(1) 00:06:31.151 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:06:31.151 EAL: Ignore mapping IO port bar(1) 00:06:31.151 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:06:31.151 EAL: Ignore mapping IO port bar(1) 00:06:31.151 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:06:31.409 EAL: Ignore mapping IO port bar(1) 00:06:31.409 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:06:31.409 EAL: Ignore mapping IO port bar(1) 00:06:31.409 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:06:31.409 EAL: Ignore mapping IO port bar(1) 00:06:31.409 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:06:31.409 EAL: Ignore mapping IO port bar(1) 00:06:31.409 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:06:31.978 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:06:31.978 EAL: Ignore mapping IO port bar(1) 00:06:31.978 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:06:31.978 EAL: Ignore mapping IO port bar(1) 00:06:31.978 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:06:31.978 EAL: Ignore mapping IO port bar(1) 00:06:31.978 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:06:32.237 EAL: Ignore mapping IO port bar(1) 00:06:32.237 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:06:32.237 EAL: Ignore mapping IO port bar(1) 00:06:32.237 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:06:32.237 EAL: Ignore mapping IO port bar(1) 00:06:32.237 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:06:32.237 EAL: Ignore mapping IO port bar(1) 00:06:32.237 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:06:32.237 EAL: Ignore mapping IO port bar(1) 00:06:32.237 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:06:35.527 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:06:35.527 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:06:36.096 Starting DPDK initialization... 00:06:36.096 Starting SPDK post initialization... 00:06:36.096 SPDK NVMe probe 00:06:36.096 Attaching to 0000:5e:00.0 00:06:36.096 Attached to 0000:5e:00.0 00:06:36.096 Cleaning up... 
00:06:36.096 00:06:36.096 real 0m4.841s 00:06:36.096 user 0m3.415s 00:06:36.096 sys 0m0.500s 00:06:36.096 10:34:25 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.096 10:34:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:36.096 ************************************ 00:06:36.096 END TEST env_dpdk_post_init 00:06:36.096 ************************************ 00:06:36.096 10:34:25 env -- env/env.sh@26 -- # uname 00:06:36.096 10:34:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:36.096 10:34:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:36.096 10:34:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.096 10:34:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.096 10:34:25 env -- common/autotest_common.sh@10 -- # set +x 00:06:36.096 ************************************ 00:06:36.096 START TEST env_mem_callbacks 00:06:36.096 ************************************ 00:06:36.096 10:34:25 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:36.096 EAL: Detected CPU lcores: 96 00:06:36.096 EAL: Detected NUMA nodes: 2 00:06:36.096 EAL: Detected shared linkage of DPDK 00:06:36.096 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:36.096 EAL: Selected IOVA mode 'VA' 00:06:36.096 EAL: VFIO support initialized 00:06:36.096 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:36.096 00:06:36.096 00:06:36.096 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.096 http://cunit.sourceforge.net/ 00:06:36.096 00:06:36.096 00:06:36.096 Suite: memory 00:06:36.096 Test: test ... 
00:06:36.096 register 0x200000200000 2097152 00:06:36.096 malloc 3145728 00:06:36.096 register 0x200000400000 4194304 00:06:36.096 buf 0x200000500000 len 3145728 PASSED 00:06:36.096 malloc 64 00:06:36.096 buf 0x2000004fff40 len 64 PASSED 00:06:36.096 malloc 4194304 00:06:36.096 register 0x200000800000 6291456 00:06:36.096 buf 0x200000a00000 len 4194304 PASSED 00:06:36.096 free 0x200000500000 3145728 00:06:36.096 free 0x2000004fff40 64 00:06:36.096 unregister 0x200000400000 4194304 PASSED 00:06:36.096 free 0x200000a00000 4194304 00:06:36.096 unregister 0x200000800000 6291456 PASSED 00:06:36.096 malloc 8388608 00:06:36.096 register 0x200000400000 10485760 00:06:36.096 buf 0x200000600000 len 8388608 PASSED 00:06:36.096 free 0x200000600000 8388608 00:06:36.096 unregister 0x200000400000 10485760 PASSED 00:06:36.096 passed 00:06:36.096 00:06:36.096 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.096 suites 1 1 n/a 0 0 00:06:36.096 tests 1 1 1 0 0 00:06:36.096 asserts 15 15 15 0 n/a 00:06:36.096 00:06:36.096 Elapsed time = 0.007 seconds 00:06:36.096 00:06:36.096 real 0m0.056s 00:06:36.096 user 0m0.021s 00:06:36.096 sys 0m0.035s 00:06:36.096 10:34:25 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.096 10:34:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:36.096 ************************************ 00:06:36.096 END TEST env_mem_callbacks 00:06:36.096 ************************************ 00:06:36.096 00:06:36.096 real 0m6.695s 00:06:36.096 user 0m4.473s 00:06:36.096 sys 0m1.308s 00:06:36.096 10:34:25 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.096 10:34:25 env -- common/autotest_common.sh@10 -- # set +x 00:06:36.096 ************************************ 00:06:36.096 END TEST env 00:06:36.096 ************************************ 00:06:36.096 10:34:25 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:36.096 10:34:25 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.096 10:34:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.096 10:34:25 -- common/autotest_common.sh@10 -- # set +x 00:06:36.096 ************************************ 00:06:36.096 START TEST rpc 00:06:36.096 ************************************ 00:06:36.096 10:34:25 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:36.356 * Looking for test storage... 00:06:36.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:36.356 10:34:25 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:36.356 10:34:25 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.356 10:34:25 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:36.356 10:34:25 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:36.356 10:34:25 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.356 10:34:25 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.356 10:34:25 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.356 10:34:25 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.356 10:34:25 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.356 10:34:25 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.356 10:34:25 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.356 10:34:25 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.356 10:34:25 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.356 10:34:25 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.356 10:34:25 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.356 10:34:25 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:36.356 10:34:25 rpc -- scripts/common.sh@345 -- # : 1 00:06:36.356 10:34:25 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.356 10:34:25 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.356 10:34:25 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:36.356 10:34:26 rpc -- scripts/common.sh@353 -- # local d=1 00:06:36.356 10:34:26 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.356 10:34:26 rpc -- scripts/common.sh@355 -- # echo 1 00:06:36.356 10:34:26 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.356 10:34:26 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:36.356 10:34:26 rpc -- scripts/common.sh@353 -- # local d=2 00:06:36.356 10:34:26 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.356 10:34:26 rpc -- scripts/common.sh@355 -- # echo 2 00:06:36.356 10:34:26 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.356 10:34:26 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.356 10:34:26 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.356 10:34:26 rpc -- scripts/common.sh@368 -- # return 0 00:06:36.356 10:34:26 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.356 10:34:26 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:36.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.356 --rc genhtml_branch_coverage=1 00:06:36.356 --rc genhtml_function_coverage=1 00:06:36.356 --rc genhtml_legend=1 00:06:36.356 --rc geninfo_all_blocks=1 00:06:36.356 --rc geninfo_unexecuted_blocks=1 00:06:36.356 00:06:36.356 ' 00:06:36.356 10:34:26 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:36.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.356 --rc genhtml_branch_coverage=1 00:06:36.356 --rc genhtml_function_coverage=1 00:06:36.356 --rc genhtml_legend=1 00:06:36.356 --rc geninfo_all_blocks=1 00:06:36.356 --rc geninfo_unexecuted_blocks=1 00:06:36.356 00:06:36.356 ' 00:06:36.356 10:34:26 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:36.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:36.356 --rc genhtml_branch_coverage=1 00:06:36.356 --rc genhtml_function_coverage=1 00:06:36.356 --rc genhtml_legend=1 00:06:36.356 --rc geninfo_all_blocks=1 00:06:36.356 --rc geninfo_unexecuted_blocks=1 00:06:36.356 00:06:36.356 ' 00:06:36.356 10:34:26 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:36.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.356 --rc genhtml_branch_coverage=1 00:06:36.356 --rc genhtml_function_coverage=1 00:06:36.356 --rc genhtml_legend=1 00:06:36.356 --rc geninfo_all_blocks=1 00:06:36.356 --rc geninfo_unexecuted_blocks=1 00:06:36.356 00:06:36.356 ' 00:06:36.356 10:34:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3733350 00:06:36.356 10:34:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:36.356 10:34:26 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:36.356 10:34:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3733350 00:06:36.356 10:34:26 rpc -- common/autotest_common.sh@835 -- # '[' -z 3733350 ']' 00:06:36.356 10:34:26 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.356 10:34:26 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.356 10:34:26 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.356 10:34:26 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.356 10:34:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.356 [2024-11-19 10:34:26.064521] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:06:36.357 [2024-11-19 10:34:26.064570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733350 ] 00:06:36.357 [2024-11-19 10:34:26.138018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.616 [2024-11-19 10:34:26.177649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:36.616 [2024-11-19 10:34:26.177684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3733350' to capture a snapshot of events at runtime. 00:06:36.616 [2024-11-19 10:34:26.177691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:36.616 [2024-11-19 10:34:26.177699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:36.616 [2024-11-19 10:34:26.177704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3733350 for offline analysis/debug. 
00:06:36.616 [2024-11-19 10:34:26.178256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.616 10:34:26 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.616 10:34:26 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:36.616 10:34:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:36.616 10:34:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:36.616 10:34:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:36.616 10:34:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:36.876 10:34:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.876 10:34:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.876 10:34:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.876 ************************************ 00:06:36.876 START TEST rpc_integrity 00:06:36.876 ************************************ 00:06:36.876 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:36.876 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:36.876 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.876 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.876 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.876 10:34:26 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:06:36.876 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:36.876 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:36.876 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:36.876 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.876 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.876 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.876 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:36.876 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:36.876 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.876 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.876 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.876 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:36.876 { 00:06:36.876 "name": "Malloc0", 00:06:36.876 "aliases": [ 00:06:36.876 "10ffd0c5-6a9f-4cb4-9b06-6ce365bacf98" 00:06:36.876 ], 00:06:36.876 "product_name": "Malloc disk", 00:06:36.876 "block_size": 512, 00:06:36.876 "num_blocks": 16384, 00:06:36.876 "uuid": "10ffd0c5-6a9f-4cb4-9b06-6ce365bacf98", 00:06:36.876 "assigned_rate_limits": { 00:06:36.876 "rw_ios_per_sec": 0, 00:06:36.876 "rw_mbytes_per_sec": 0, 00:06:36.876 "r_mbytes_per_sec": 0, 00:06:36.876 "w_mbytes_per_sec": 0 00:06:36.876 }, 00:06:36.876 "claimed": false, 00:06:36.876 "zoned": false, 00:06:36.876 "supported_io_types": { 00:06:36.876 "read": true, 00:06:36.876 "write": true, 00:06:36.876 "unmap": true, 00:06:36.876 "flush": true, 00:06:36.876 "reset": true, 00:06:36.876 "nvme_admin": false, 00:06:36.876 "nvme_io": false, 00:06:36.876 "nvme_io_md": false, 00:06:36.876 "write_zeroes": true, 00:06:36.876 "zcopy": true, 00:06:36.876 "get_zone_info": false, 00:06:36.876 
"zone_management": false, 00:06:36.876 "zone_append": false, 00:06:36.876 "compare": false, 00:06:36.876 "compare_and_write": false, 00:06:36.876 "abort": true, 00:06:36.876 "seek_hole": false, 00:06:36.876 "seek_data": false, 00:06:36.876 "copy": true, 00:06:36.876 "nvme_iov_md": false 00:06:36.876 }, 00:06:36.876 "memory_domains": [ 00:06:36.876 { 00:06:36.876 "dma_device_id": "system", 00:06:36.876 "dma_device_type": 1 00:06:36.876 }, 00:06:36.876 { 00:06:36.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.876 "dma_device_type": 2 00:06:36.876 } 00:06:36.876 ], 00:06:36.876 "driver_specific": {} 00:06:36.876 } 00:06:36.876 ]' 00:06:36.876 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:36.876 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:36.876 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:36.876 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.876 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.876 [2024-11-19 10:34:26.563695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:36.876 [2024-11-19 10:34:26.563725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:36.876 [2024-11-19 10:34:26.563738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8646e0 00:06:36.876 [2024-11-19 10:34:26.563745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:36.876 [2024-11-19 10:34:26.564841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:36.876 [2024-11-19 10:34:26.564861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:36.876 Passthru0 00:06:36.876 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.876 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:06:36.876 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.876 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.876 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.876 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:36.876 { 00:06:36.876 "name": "Malloc0", 00:06:36.876 "aliases": [ 00:06:36.876 "10ffd0c5-6a9f-4cb4-9b06-6ce365bacf98" 00:06:36.876 ], 00:06:36.876 "product_name": "Malloc disk", 00:06:36.876 "block_size": 512, 00:06:36.876 "num_blocks": 16384, 00:06:36.876 "uuid": "10ffd0c5-6a9f-4cb4-9b06-6ce365bacf98", 00:06:36.876 "assigned_rate_limits": { 00:06:36.876 "rw_ios_per_sec": 0, 00:06:36.876 "rw_mbytes_per_sec": 0, 00:06:36.876 "r_mbytes_per_sec": 0, 00:06:36.876 "w_mbytes_per_sec": 0 00:06:36.876 }, 00:06:36.876 "claimed": true, 00:06:36.876 "claim_type": "exclusive_write", 00:06:36.876 "zoned": false, 00:06:36.876 "supported_io_types": { 00:06:36.876 "read": true, 00:06:36.876 "write": true, 00:06:36.876 "unmap": true, 00:06:36.876 "flush": true, 00:06:36.876 "reset": true, 00:06:36.876 "nvme_admin": false, 00:06:36.876 "nvme_io": false, 00:06:36.876 "nvme_io_md": false, 00:06:36.876 "write_zeroes": true, 00:06:36.876 "zcopy": true, 00:06:36.876 "get_zone_info": false, 00:06:36.876 "zone_management": false, 00:06:36.876 "zone_append": false, 00:06:36.876 "compare": false, 00:06:36.876 "compare_and_write": false, 00:06:36.876 "abort": true, 00:06:36.876 "seek_hole": false, 00:06:36.876 "seek_data": false, 00:06:36.876 "copy": true, 00:06:36.876 "nvme_iov_md": false 00:06:36.876 }, 00:06:36.876 "memory_domains": [ 00:06:36.876 { 00:06:36.876 "dma_device_id": "system", 00:06:36.876 "dma_device_type": 1 00:06:36.876 }, 00:06:36.876 { 00:06:36.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.876 "dma_device_type": 2 00:06:36.876 } 00:06:36.876 ], 00:06:36.876 "driver_specific": {} 00:06:36.876 }, 00:06:36.876 { 
00:06:36.876 "name": "Passthru0", 00:06:36.876 "aliases": [ 00:06:36.876 "40241e40-562d-55c5-a501-52406d99e007" 00:06:36.876 ], 00:06:36.876 "product_name": "passthru", 00:06:36.876 "block_size": 512, 00:06:36.876 "num_blocks": 16384, 00:06:36.876 "uuid": "40241e40-562d-55c5-a501-52406d99e007", 00:06:36.876 "assigned_rate_limits": { 00:06:36.876 "rw_ios_per_sec": 0, 00:06:36.876 "rw_mbytes_per_sec": 0, 00:06:36.876 "r_mbytes_per_sec": 0, 00:06:36.876 "w_mbytes_per_sec": 0 00:06:36.876 }, 00:06:36.876 "claimed": false, 00:06:36.876 "zoned": false, 00:06:36.876 "supported_io_types": { 00:06:36.876 "read": true, 00:06:36.876 "write": true, 00:06:36.876 "unmap": true, 00:06:36.876 "flush": true, 00:06:36.876 "reset": true, 00:06:36.876 "nvme_admin": false, 00:06:36.876 "nvme_io": false, 00:06:36.876 "nvme_io_md": false, 00:06:36.876 "write_zeroes": true, 00:06:36.876 "zcopy": true, 00:06:36.876 "get_zone_info": false, 00:06:36.876 "zone_management": false, 00:06:36.876 "zone_append": false, 00:06:36.876 "compare": false, 00:06:36.876 "compare_and_write": false, 00:06:36.876 "abort": true, 00:06:36.876 "seek_hole": false, 00:06:36.876 "seek_data": false, 00:06:36.876 "copy": true, 00:06:36.876 "nvme_iov_md": false 00:06:36.876 }, 00:06:36.876 "memory_domains": [ 00:06:36.876 { 00:06:36.876 "dma_device_id": "system", 00:06:36.877 "dma_device_type": 1 00:06:36.877 }, 00:06:36.877 { 00:06:36.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.877 "dma_device_type": 2 00:06:36.877 } 00:06:36.877 ], 00:06:36.877 "driver_specific": { 00:06:36.877 "passthru": { 00:06:36.877 "name": "Passthru0", 00:06:36.877 "base_bdev_name": "Malloc0" 00:06:36.877 } 00:06:36.877 } 00:06:36.877 } 00:06:36.877 ]' 00:06:36.877 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:36.877 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:36.877 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:36.877 10:34:26 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.877 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.877 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.877 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:36.877 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.877 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.877 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.877 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:36.877 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.877 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.877 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.136 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:37.136 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:37.136 10:34:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:37.136 00:06:37.136 real 0m0.269s 00:06:37.136 user 0m0.173s 00:06:37.136 sys 0m0.037s 00:06:37.136 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.136 10:34:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.136 ************************************ 00:06:37.136 END TEST rpc_integrity 00:06:37.136 ************************************ 00:06:37.136 10:34:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:37.136 10:34:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.136 10:34:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.136 10:34:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.136 ************************************ 00:06:37.136 START TEST rpc_plugins 
00:06:37.136 ************************************ 00:06:37.136 10:34:26 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:37.136 10:34:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:37.136 10:34:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.136 10:34:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:37.136 10:34:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.136 10:34:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:37.136 10:34:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:37.136 10:34:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.136 10:34:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:37.136 10:34:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.136 10:34:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:37.136 { 00:06:37.136 "name": "Malloc1", 00:06:37.136 "aliases": [ 00:06:37.136 "074f6596-0fff-4ee2-acaa-6b80c9c73716" 00:06:37.136 ], 00:06:37.136 "product_name": "Malloc disk", 00:06:37.136 "block_size": 4096, 00:06:37.136 "num_blocks": 256, 00:06:37.136 "uuid": "074f6596-0fff-4ee2-acaa-6b80c9c73716", 00:06:37.136 "assigned_rate_limits": { 00:06:37.136 "rw_ios_per_sec": 0, 00:06:37.136 "rw_mbytes_per_sec": 0, 00:06:37.136 "r_mbytes_per_sec": 0, 00:06:37.136 "w_mbytes_per_sec": 0 00:06:37.136 }, 00:06:37.136 "claimed": false, 00:06:37.136 "zoned": false, 00:06:37.136 "supported_io_types": { 00:06:37.136 "read": true, 00:06:37.136 "write": true, 00:06:37.136 "unmap": true, 00:06:37.136 "flush": true, 00:06:37.136 "reset": true, 00:06:37.136 "nvme_admin": false, 00:06:37.136 "nvme_io": false, 00:06:37.136 "nvme_io_md": false, 00:06:37.136 "write_zeroes": true, 00:06:37.136 "zcopy": true, 00:06:37.136 "get_zone_info": false, 00:06:37.136 "zone_management": false, 00:06:37.136 
"zone_append": false, 00:06:37.136 "compare": false, 00:06:37.136 "compare_and_write": false, 00:06:37.136 "abort": true, 00:06:37.136 "seek_hole": false, 00:06:37.136 "seek_data": false, 00:06:37.136 "copy": true, 00:06:37.136 "nvme_iov_md": false 00:06:37.136 }, 00:06:37.136 "memory_domains": [ 00:06:37.136 { 00:06:37.136 "dma_device_id": "system", 00:06:37.136 "dma_device_type": 1 00:06:37.136 }, 00:06:37.136 { 00:06:37.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.136 "dma_device_type": 2 00:06:37.136 } 00:06:37.136 ], 00:06:37.136 "driver_specific": {} 00:06:37.136 } 00:06:37.136 ]' 00:06:37.136 10:34:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:37.136 10:34:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:37.136 10:34:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:37.136 10:34:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.136 10:34:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:37.136 10:34:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.136 10:34:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:37.136 10:34:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.136 10:34:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:37.136 10:34:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.136 10:34:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:37.136 10:34:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:37.136 10:34:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:37.136 00:06:37.136 real 0m0.141s 00:06:37.136 user 0m0.084s 00:06:37.136 sys 0m0.022s 00:06:37.136 10:34:26 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.136 10:34:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:37.136 ************************************ 
00:06:37.136 END TEST rpc_plugins 00:06:37.136 ************************************ 00:06:37.395 10:34:26 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:37.395 10:34:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.395 10:34:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.395 10:34:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.395 ************************************ 00:06:37.395 START TEST rpc_trace_cmd_test 00:06:37.395 ************************************ 00:06:37.395 10:34:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:37.395 10:34:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:37.395 10:34:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:37.395 10:34:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.395 10:34:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.395 10:34:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.395 10:34:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:37.395 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3733350", 00:06:37.395 "tpoint_group_mask": "0x8", 00:06:37.395 "iscsi_conn": { 00:06:37.395 "mask": "0x2", 00:06:37.395 "tpoint_mask": "0x0" 00:06:37.395 }, 00:06:37.395 "scsi": { 00:06:37.395 "mask": "0x4", 00:06:37.395 "tpoint_mask": "0x0" 00:06:37.395 }, 00:06:37.395 "bdev": { 00:06:37.395 "mask": "0x8", 00:06:37.396 "tpoint_mask": "0xffffffffffffffff" 00:06:37.396 }, 00:06:37.396 "nvmf_rdma": { 00:06:37.396 "mask": "0x10", 00:06:37.396 "tpoint_mask": "0x0" 00:06:37.396 }, 00:06:37.396 "nvmf_tcp": { 00:06:37.396 "mask": "0x20", 00:06:37.396 "tpoint_mask": "0x0" 00:06:37.396 }, 00:06:37.396 "ftl": { 00:06:37.396 "mask": "0x40", 00:06:37.396 "tpoint_mask": "0x0" 00:06:37.396 }, 00:06:37.396 "blobfs": { 00:06:37.396 "mask": "0x80", 00:06:37.396 
"tpoint_mask": "0x0" 00:06:37.396 }, 00:06:37.396 "dsa": { 00:06:37.396 "mask": "0x200", 00:06:37.396 "tpoint_mask": "0x0" 00:06:37.396 }, 00:06:37.396 "thread": { 00:06:37.396 "mask": "0x400", 00:06:37.396 "tpoint_mask": "0x0" 00:06:37.396 }, 00:06:37.396 "nvme_pcie": { 00:06:37.396 "mask": "0x800", 00:06:37.396 "tpoint_mask": "0x0" 00:06:37.396 }, 00:06:37.396 "iaa": { 00:06:37.396 "mask": "0x1000", 00:06:37.396 "tpoint_mask": "0x0" 00:06:37.396 }, 00:06:37.396 "nvme_tcp": { 00:06:37.396 "mask": "0x2000", 00:06:37.396 "tpoint_mask": "0x0" 00:06:37.396 }, 00:06:37.396 "bdev_nvme": { 00:06:37.396 "mask": "0x4000", 00:06:37.396 "tpoint_mask": "0x0" 00:06:37.396 }, 00:06:37.396 "sock": { 00:06:37.396 "mask": "0x8000", 00:06:37.396 "tpoint_mask": "0x0" 00:06:37.396 }, 00:06:37.396 "blob": { 00:06:37.396 "mask": "0x10000", 00:06:37.396 "tpoint_mask": "0x0" 00:06:37.396 }, 00:06:37.396 "bdev_raid": { 00:06:37.396 "mask": "0x20000", 00:06:37.396 "tpoint_mask": "0x0" 00:06:37.396 }, 00:06:37.396 "scheduler": { 00:06:37.396 "mask": "0x40000", 00:06:37.396 "tpoint_mask": "0x0" 00:06:37.396 } 00:06:37.396 }' 00:06:37.396 10:34:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:37.396 10:34:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:37.396 10:34:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:37.396 10:34:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:37.396 10:34:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:37.396 10:34:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:37.396 10:34:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:37.396 10:34:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:37.396 10:34:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:37.655 10:34:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:06:37.655 00:06:37.655 real 0m0.218s 00:06:37.655 user 0m0.180s 00:06:37.655 sys 0m0.029s 00:06:37.655 10:34:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.655 10:34:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.655 ************************************ 00:06:37.655 END TEST rpc_trace_cmd_test 00:06:37.655 ************************************ 00:06:37.655 10:34:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:37.655 10:34:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:37.655 10:34:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:37.655 10:34:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.655 10:34:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.655 10:34:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.655 ************************************ 00:06:37.655 START TEST rpc_daemon_integrity 00:06:37.655 ************************************ 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:37.655 { 00:06:37.655 "name": "Malloc2", 00:06:37.655 "aliases": [ 00:06:37.655 "f69cc524-9d81-4e62-a1a0-3b3244cd9e7a" 00:06:37.655 ], 00:06:37.655 "product_name": "Malloc disk", 00:06:37.655 "block_size": 512, 00:06:37.655 "num_blocks": 16384, 00:06:37.655 "uuid": "f69cc524-9d81-4e62-a1a0-3b3244cd9e7a", 00:06:37.655 "assigned_rate_limits": { 00:06:37.655 "rw_ios_per_sec": 0, 00:06:37.655 "rw_mbytes_per_sec": 0, 00:06:37.655 "r_mbytes_per_sec": 0, 00:06:37.655 "w_mbytes_per_sec": 0 00:06:37.655 }, 00:06:37.655 "claimed": false, 00:06:37.655 "zoned": false, 00:06:37.655 "supported_io_types": { 00:06:37.655 "read": true, 00:06:37.655 "write": true, 00:06:37.655 "unmap": true, 00:06:37.655 "flush": true, 00:06:37.655 "reset": true, 00:06:37.655 "nvme_admin": false, 00:06:37.655 "nvme_io": false, 00:06:37.655 "nvme_io_md": false, 00:06:37.655 "write_zeroes": true, 00:06:37.655 "zcopy": true, 00:06:37.655 "get_zone_info": false, 00:06:37.655 "zone_management": false, 00:06:37.655 "zone_append": false, 00:06:37.655 "compare": false, 00:06:37.655 "compare_and_write": false, 00:06:37.655 "abort": true, 00:06:37.655 "seek_hole": false, 00:06:37.655 "seek_data": false, 00:06:37.655 "copy": true, 00:06:37.655 "nvme_iov_md": false 00:06:37.655 }, 00:06:37.655 "memory_domains": [ 00:06:37.655 { 
00:06:37.655 "dma_device_id": "system", 00:06:37.655 "dma_device_type": 1 00:06:37.655 }, 00:06:37.655 { 00:06:37.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.655 "dma_device_type": 2 00:06:37.655 } 00:06:37.655 ], 00:06:37.655 "driver_specific": {} 00:06:37.655 } 00:06:37.655 ]' 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.655 [2024-11-19 10:34:27.405995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:37.655 [2024-11-19 10:34:27.406023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:37.655 [2024-11-19 10:34:27.406034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8f4b70 00:06:37.655 [2024-11-19 10:34:27.406040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:37.655 [2024-11-19 10:34:27.407004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:37.655 [2024-11-19 10:34:27.407025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:37.655 Passthru0 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:37.655 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:37.655 { 00:06:37.655 "name": "Malloc2", 00:06:37.655 "aliases": [ 00:06:37.655 "f69cc524-9d81-4e62-a1a0-3b3244cd9e7a" 00:06:37.655 ], 00:06:37.655 "product_name": "Malloc disk", 00:06:37.655 "block_size": 512, 00:06:37.655 "num_blocks": 16384, 00:06:37.655 "uuid": "f69cc524-9d81-4e62-a1a0-3b3244cd9e7a", 00:06:37.655 "assigned_rate_limits": { 00:06:37.655 "rw_ios_per_sec": 0, 00:06:37.655 "rw_mbytes_per_sec": 0, 00:06:37.656 "r_mbytes_per_sec": 0, 00:06:37.656 "w_mbytes_per_sec": 0 00:06:37.656 }, 00:06:37.656 "claimed": true, 00:06:37.656 "claim_type": "exclusive_write", 00:06:37.656 "zoned": false, 00:06:37.656 "supported_io_types": { 00:06:37.656 "read": true, 00:06:37.656 "write": true, 00:06:37.656 "unmap": true, 00:06:37.656 "flush": true, 00:06:37.656 "reset": true, 00:06:37.656 "nvme_admin": false, 00:06:37.656 "nvme_io": false, 00:06:37.656 "nvme_io_md": false, 00:06:37.656 "write_zeroes": true, 00:06:37.656 "zcopy": true, 00:06:37.656 "get_zone_info": false, 00:06:37.656 "zone_management": false, 00:06:37.656 "zone_append": false, 00:06:37.656 "compare": false, 00:06:37.656 "compare_and_write": false, 00:06:37.656 "abort": true, 00:06:37.656 "seek_hole": false, 00:06:37.656 "seek_data": false, 00:06:37.656 "copy": true, 00:06:37.656 "nvme_iov_md": false 00:06:37.656 }, 00:06:37.656 "memory_domains": [ 00:06:37.656 { 00:06:37.656 "dma_device_id": "system", 00:06:37.656 "dma_device_type": 1 00:06:37.656 }, 00:06:37.656 { 00:06:37.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.656 "dma_device_type": 2 00:06:37.656 } 00:06:37.656 ], 00:06:37.656 "driver_specific": {} 00:06:37.656 }, 00:06:37.656 { 00:06:37.656 "name": "Passthru0", 00:06:37.656 "aliases": [ 00:06:37.656 "51efadb9-b053-5dcb-b0de-7880e80c2e49" 00:06:37.656 ], 00:06:37.656 "product_name": "passthru", 00:06:37.656 "block_size": 512, 00:06:37.656 "num_blocks": 16384, 00:06:37.656 "uuid": 
"51efadb9-b053-5dcb-b0de-7880e80c2e49", 00:06:37.656 "assigned_rate_limits": { 00:06:37.656 "rw_ios_per_sec": 0, 00:06:37.656 "rw_mbytes_per_sec": 0, 00:06:37.656 "r_mbytes_per_sec": 0, 00:06:37.656 "w_mbytes_per_sec": 0 00:06:37.656 }, 00:06:37.656 "claimed": false, 00:06:37.656 "zoned": false, 00:06:37.656 "supported_io_types": { 00:06:37.656 "read": true, 00:06:37.656 "write": true, 00:06:37.656 "unmap": true, 00:06:37.656 "flush": true, 00:06:37.656 "reset": true, 00:06:37.656 "nvme_admin": false, 00:06:37.656 "nvme_io": false, 00:06:37.656 "nvme_io_md": false, 00:06:37.656 "write_zeroes": true, 00:06:37.656 "zcopy": true, 00:06:37.656 "get_zone_info": false, 00:06:37.656 "zone_management": false, 00:06:37.656 "zone_append": false, 00:06:37.656 "compare": false, 00:06:37.656 "compare_and_write": false, 00:06:37.656 "abort": true, 00:06:37.656 "seek_hole": false, 00:06:37.656 "seek_data": false, 00:06:37.656 "copy": true, 00:06:37.656 "nvme_iov_md": false 00:06:37.656 }, 00:06:37.656 "memory_domains": [ 00:06:37.656 { 00:06:37.656 "dma_device_id": "system", 00:06:37.656 "dma_device_type": 1 00:06:37.656 }, 00:06:37.656 { 00:06:37.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.656 "dma_device_type": 2 00:06:37.656 } 00:06:37.656 ], 00:06:37.656 "driver_specific": { 00:06:37.656 "passthru": { 00:06:37.656 "name": "Passthru0", 00:06:37.656 "base_bdev_name": "Malloc2" 00:06:37.656 } 00:06:37.656 } 00:06:37.656 } 00:06:37.656 ]' 00:06:37.656 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:37.915 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:37.915 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:37.916 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.916 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.916 10:34:27 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.916 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:37.916 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.916 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.916 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.916 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:37.916 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.916 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.916 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.916 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:37.916 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:37.916 10:34:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:37.916 00:06:37.916 real 0m0.275s 00:06:37.916 user 0m0.169s 00:06:37.916 sys 0m0.040s 00:06:37.916 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.916 10:34:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.916 ************************************ 00:06:37.916 END TEST rpc_daemon_integrity 00:06:37.916 ************************************ 00:06:37.916 10:34:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:37.916 10:34:27 rpc -- rpc/rpc.sh@84 -- # killprocess 3733350 00:06:37.916 10:34:27 rpc -- common/autotest_common.sh@954 -- # '[' -z 3733350 ']' 00:06:37.916 10:34:27 rpc -- common/autotest_common.sh@958 -- # kill -0 3733350 00:06:37.916 10:34:27 rpc -- common/autotest_common.sh@959 -- # uname 00:06:37.916 10:34:27 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.916 10:34:27 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3733350 00:06:37.916 10:34:27 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.916 10:34:27 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.916 10:34:27 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3733350' 00:06:37.916 killing process with pid 3733350 00:06:37.916 10:34:27 rpc -- common/autotest_common.sh@973 -- # kill 3733350 00:06:37.916 10:34:27 rpc -- common/autotest_common.sh@978 -- # wait 3733350 00:06:38.175 00:06:38.175 real 0m2.097s 00:06:38.175 user 0m2.647s 00:06:38.175 sys 0m0.714s 00:06:38.175 10:34:27 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.175 10:34:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.175 ************************************ 00:06:38.175 END TEST rpc 00:06:38.175 ************************************ 00:06:38.434 10:34:27 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:38.434 10:34:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.434 10:34:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.434 10:34:27 -- common/autotest_common.sh@10 -- # set +x 00:06:38.434 ************************************ 00:06:38.434 START TEST skip_rpc 00:06:38.434 ************************************ 00:06:38.434 10:34:28 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:38.434 * Looking for test storage... 
00:06:38.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:38.434 10:34:28 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.434 10:34:28 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.434 10:34:28 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.434 10:34:28 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.434 10:34:28 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.434 10:34:28 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.434 10:34:28 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.434 10:34:28 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.434 10:34:28 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.434 10:34:28 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.434 10:34:28 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.435 10:34:28 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:38.435 10:34:28 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.435 10:34:28 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.435 --rc genhtml_branch_coverage=1 00:06:38.435 --rc genhtml_function_coverage=1 00:06:38.435 --rc genhtml_legend=1 00:06:38.435 --rc geninfo_all_blocks=1 00:06:38.435 --rc geninfo_unexecuted_blocks=1 00:06:38.435 00:06:38.435 ' 00:06:38.435 10:34:28 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.435 --rc genhtml_branch_coverage=1 00:06:38.435 --rc genhtml_function_coverage=1 00:06:38.435 --rc genhtml_legend=1 00:06:38.435 --rc geninfo_all_blocks=1 00:06:38.435 --rc geninfo_unexecuted_blocks=1 00:06:38.435 00:06:38.435 ' 00:06:38.435 10:34:28 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:38.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.435 --rc genhtml_branch_coverage=1 00:06:38.435 --rc genhtml_function_coverage=1 00:06:38.435 --rc genhtml_legend=1 00:06:38.435 --rc geninfo_all_blocks=1 00:06:38.435 --rc geninfo_unexecuted_blocks=1 00:06:38.435 00:06:38.435 ' 00:06:38.435 10:34:28 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.435 --rc genhtml_branch_coverage=1 00:06:38.435 --rc genhtml_function_coverage=1 00:06:38.435 --rc genhtml_legend=1 00:06:38.435 --rc geninfo_all_blocks=1 00:06:38.435 --rc geninfo_unexecuted_blocks=1 00:06:38.435 00:06:38.435 ' 00:06:38.435 10:34:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:38.435 10:34:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:38.435 10:34:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:38.435 10:34:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.435 10:34:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.435 10:34:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.435 ************************************ 00:06:38.435 START TEST skip_rpc 00:06:38.435 ************************************ 00:06:38.435 10:34:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:38.435 10:34:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3733983 00:06:38.435 10:34:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.435 10:34:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:38.435 10:34:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:06:38.694 [2024-11-19 10:34:28.268515] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:06:38.694 [2024-11-19 10:34:28.268551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733983 ] 00:06:38.694 [2024-11-19 10:34:28.339175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.694 [2024-11-19 10:34:28.378711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:43.967 10:34:33 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3733983 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3733983 ']' 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3733983 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3733983 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3733983' 00:06:43.967 killing process with pid 3733983 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3733983 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3733983 00:06:43.967 00:06:43.967 real 0m5.366s 00:06:43.967 user 0m5.123s 00:06:43.967 sys 0m0.279s 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.967 10:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.967 ************************************ 00:06:43.967 END TEST skip_rpc 00:06:43.967 ************************************ 00:06:43.967 10:34:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:43.967 10:34:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.967 10:34:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.967 10:34:33 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.967 ************************************ 00:06:43.967 START TEST skip_rpc_with_json 00:06:43.967 ************************************ 00:06:43.967 10:34:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:43.967 10:34:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:43.967 10:34:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3734930 00:06:43.967 10:34:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:43.967 10:34:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.967 10:34:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3734930 00:06:43.967 10:34:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3734930 ']' 00:06:43.967 10:34:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.967 10:34:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.967 10:34:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.967 10:34:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.967 10:34:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:43.967 [2024-11-19 10:34:33.711613] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:06:43.968 [2024-11-19 10:34:33.711655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3734930 ] 00:06:44.227 [2024-11-19 10:34:33.787761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.227 [2024-11-19 10:34:33.829595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.794 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.794 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:44.794 10:34:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:44.794 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.794 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:44.794 [2024-11-19 10:34:34.544757] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:44.794 request: 00:06:44.794 { 00:06:44.794 "trtype": "tcp", 00:06:44.794 "method": "nvmf_get_transports", 00:06:44.794 "req_id": 1 00:06:44.794 } 00:06:44.794 Got JSON-RPC error response 00:06:44.794 response: 00:06:44.794 { 00:06:44.794 "code": -19, 00:06:44.794 "message": "No such device" 00:06:44.794 } 00:06:44.794 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:44.794 10:34:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:44.794 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.794 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:44.794 [2024-11-19 10:34:34.556860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.794 10:34:34 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.794 10:34:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:44.794 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.794 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:45.054 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.054 10:34:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:45.054 { 00:06:45.054 "subsystems": [ 00:06:45.054 { 00:06:45.054 "subsystem": "fsdev", 00:06:45.054 "config": [ 00:06:45.054 { 00:06:45.054 "method": "fsdev_set_opts", 00:06:45.054 "params": { 00:06:45.054 "fsdev_io_pool_size": 65535, 00:06:45.054 "fsdev_io_cache_size": 256 00:06:45.054 } 00:06:45.054 } 00:06:45.054 ] 00:06:45.054 }, 00:06:45.054 { 00:06:45.054 "subsystem": "vfio_user_target", 00:06:45.054 "config": null 00:06:45.054 }, 00:06:45.054 { 00:06:45.054 "subsystem": "keyring", 00:06:45.054 "config": [] 00:06:45.054 }, 00:06:45.054 { 00:06:45.054 "subsystem": "iobuf", 00:06:45.054 "config": [ 00:06:45.054 { 00:06:45.054 "method": "iobuf_set_options", 00:06:45.054 "params": { 00:06:45.054 "small_pool_count": 8192, 00:06:45.054 "large_pool_count": 1024, 00:06:45.054 "small_bufsize": 8192, 00:06:45.054 "large_bufsize": 135168, 00:06:45.054 "enable_numa": false 00:06:45.054 } 00:06:45.054 } 00:06:45.054 ] 00:06:45.054 }, 00:06:45.054 { 00:06:45.054 "subsystem": "sock", 00:06:45.054 "config": [ 00:06:45.054 { 00:06:45.054 "method": "sock_set_default_impl", 00:06:45.054 "params": { 00:06:45.054 "impl_name": "posix" 00:06:45.054 } 00:06:45.054 }, 00:06:45.054 { 00:06:45.054 "method": "sock_impl_set_options", 00:06:45.054 "params": { 00:06:45.054 "impl_name": "ssl", 00:06:45.054 "recv_buf_size": 4096, 00:06:45.054 "send_buf_size": 4096, 
00:06:45.054 "enable_recv_pipe": true, 00:06:45.054 "enable_quickack": false, 00:06:45.054 "enable_placement_id": 0, 00:06:45.054 "enable_zerocopy_send_server": true, 00:06:45.054 "enable_zerocopy_send_client": false, 00:06:45.054 "zerocopy_threshold": 0, 00:06:45.054 "tls_version": 0, 00:06:45.054 "enable_ktls": false 00:06:45.054 } 00:06:45.054 }, 00:06:45.054 { 00:06:45.054 "method": "sock_impl_set_options", 00:06:45.054 "params": { 00:06:45.054 "impl_name": "posix", 00:06:45.054 "recv_buf_size": 2097152, 00:06:45.054 "send_buf_size": 2097152, 00:06:45.054 "enable_recv_pipe": true, 00:06:45.054 "enable_quickack": false, 00:06:45.054 "enable_placement_id": 0, 00:06:45.054 "enable_zerocopy_send_server": true, 00:06:45.054 "enable_zerocopy_send_client": false, 00:06:45.054 "zerocopy_threshold": 0, 00:06:45.054 "tls_version": 0, 00:06:45.054 "enable_ktls": false 00:06:45.054 } 00:06:45.054 } 00:06:45.054 ] 00:06:45.054 }, 00:06:45.054 { 00:06:45.054 "subsystem": "vmd", 00:06:45.054 "config": [] 00:06:45.054 }, 00:06:45.054 { 00:06:45.054 "subsystem": "accel", 00:06:45.054 "config": [ 00:06:45.054 { 00:06:45.054 "method": "accel_set_options", 00:06:45.054 "params": { 00:06:45.054 "small_cache_size": 128, 00:06:45.054 "large_cache_size": 16, 00:06:45.054 "task_count": 2048, 00:06:45.054 "sequence_count": 2048, 00:06:45.054 "buf_count": 2048 00:06:45.054 } 00:06:45.054 } 00:06:45.054 ] 00:06:45.054 }, 00:06:45.054 { 00:06:45.054 "subsystem": "bdev", 00:06:45.054 "config": [ 00:06:45.054 { 00:06:45.054 "method": "bdev_set_options", 00:06:45.054 "params": { 00:06:45.054 "bdev_io_pool_size": 65535, 00:06:45.054 "bdev_io_cache_size": 256, 00:06:45.054 "bdev_auto_examine": true, 00:06:45.054 "iobuf_small_cache_size": 128, 00:06:45.054 "iobuf_large_cache_size": 16 00:06:45.054 } 00:06:45.054 }, 00:06:45.054 { 00:06:45.054 "method": "bdev_raid_set_options", 00:06:45.054 "params": { 00:06:45.054 "process_window_size_kb": 1024, 00:06:45.054 "process_max_bandwidth_mb_sec": 0 
00:06:45.054 } 00:06:45.054 }, 00:06:45.054 { 00:06:45.054 "method": "bdev_iscsi_set_options", 00:06:45.054 "params": { 00:06:45.054 "timeout_sec": 30 00:06:45.054 } 00:06:45.054 }, 00:06:45.054 { 00:06:45.054 "method": "bdev_nvme_set_options", 00:06:45.054 "params": { 00:06:45.054 "action_on_timeout": "none", 00:06:45.054 "timeout_us": 0, 00:06:45.054 "timeout_admin_us": 0, 00:06:45.054 "keep_alive_timeout_ms": 10000, 00:06:45.054 "arbitration_burst": 0, 00:06:45.054 "low_priority_weight": 0, 00:06:45.054 "medium_priority_weight": 0, 00:06:45.054 "high_priority_weight": 0, 00:06:45.054 "nvme_adminq_poll_period_us": 10000, 00:06:45.054 "nvme_ioq_poll_period_us": 0, 00:06:45.054 "io_queue_requests": 0, 00:06:45.054 "delay_cmd_submit": true, 00:06:45.054 "transport_retry_count": 4, 00:06:45.054 "bdev_retry_count": 3, 00:06:45.054 "transport_ack_timeout": 0, 00:06:45.054 "ctrlr_loss_timeout_sec": 0, 00:06:45.054 "reconnect_delay_sec": 0, 00:06:45.054 "fast_io_fail_timeout_sec": 0, 00:06:45.055 "disable_auto_failback": false, 00:06:45.055 "generate_uuids": false, 00:06:45.055 "transport_tos": 0, 00:06:45.055 "nvme_error_stat": false, 00:06:45.055 "rdma_srq_size": 0, 00:06:45.055 "io_path_stat": false, 00:06:45.055 "allow_accel_sequence": false, 00:06:45.055 "rdma_max_cq_size": 0, 00:06:45.055 "rdma_cm_event_timeout_ms": 0, 00:06:45.055 "dhchap_digests": [ 00:06:45.055 "sha256", 00:06:45.055 "sha384", 00:06:45.055 "sha512" 00:06:45.055 ], 00:06:45.055 "dhchap_dhgroups": [ 00:06:45.055 "null", 00:06:45.055 "ffdhe2048", 00:06:45.055 "ffdhe3072", 00:06:45.055 "ffdhe4096", 00:06:45.055 "ffdhe6144", 00:06:45.055 "ffdhe8192" 00:06:45.055 ] 00:06:45.055 } 00:06:45.055 }, 00:06:45.055 { 00:06:45.055 "method": "bdev_nvme_set_hotplug", 00:06:45.055 "params": { 00:06:45.055 "period_us": 100000, 00:06:45.055 "enable": false 00:06:45.055 } 00:06:45.055 }, 00:06:45.055 { 00:06:45.055 "method": "bdev_wait_for_examine" 00:06:45.055 } 00:06:45.055 ] 00:06:45.055 }, 00:06:45.055 { 
00:06:45.055 "subsystem": "scsi", 00:06:45.055 "config": null 00:06:45.055 }, 00:06:45.055 { 00:06:45.055 "subsystem": "scheduler", 00:06:45.055 "config": [ 00:06:45.055 { 00:06:45.055 "method": "framework_set_scheduler", 00:06:45.055 "params": { 00:06:45.055 "name": "static" 00:06:45.055 } 00:06:45.055 } 00:06:45.055 ] 00:06:45.055 }, 00:06:45.055 { 00:06:45.055 "subsystem": "vhost_scsi", 00:06:45.055 "config": [] 00:06:45.055 }, 00:06:45.055 { 00:06:45.055 "subsystem": "vhost_blk", 00:06:45.055 "config": [] 00:06:45.055 }, 00:06:45.055 { 00:06:45.055 "subsystem": "ublk", 00:06:45.055 "config": [] 00:06:45.055 }, 00:06:45.055 { 00:06:45.055 "subsystem": "nbd", 00:06:45.055 "config": [] 00:06:45.055 }, 00:06:45.055 { 00:06:45.055 "subsystem": "nvmf", 00:06:45.055 "config": [ 00:06:45.055 { 00:06:45.055 "method": "nvmf_set_config", 00:06:45.055 "params": { 00:06:45.055 "discovery_filter": "match_any", 00:06:45.055 "admin_cmd_passthru": { 00:06:45.055 "identify_ctrlr": false 00:06:45.055 }, 00:06:45.055 "dhchap_digests": [ 00:06:45.055 "sha256", 00:06:45.055 "sha384", 00:06:45.055 "sha512" 00:06:45.055 ], 00:06:45.055 "dhchap_dhgroups": [ 00:06:45.055 "null", 00:06:45.055 "ffdhe2048", 00:06:45.055 "ffdhe3072", 00:06:45.055 "ffdhe4096", 00:06:45.055 "ffdhe6144", 00:06:45.055 "ffdhe8192" 00:06:45.055 ] 00:06:45.055 } 00:06:45.055 }, 00:06:45.055 { 00:06:45.055 "method": "nvmf_set_max_subsystems", 00:06:45.055 "params": { 00:06:45.055 "max_subsystems": 1024 00:06:45.055 } 00:06:45.055 }, 00:06:45.055 { 00:06:45.055 "method": "nvmf_set_crdt", 00:06:45.055 "params": { 00:06:45.055 "crdt1": 0, 00:06:45.055 "crdt2": 0, 00:06:45.055 "crdt3": 0 00:06:45.055 } 00:06:45.055 }, 00:06:45.055 { 00:06:45.055 "method": "nvmf_create_transport", 00:06:45.055 "params": { 00:06:45.055 "trtype": "TCP", 00:06:45.055 "max_queue_depth": 128, 00:06:45.055 "max_io_qpairs_per_ctrlr": 127, 00:06:45.055 "in_capsule_data_size": 4096, 00:06:45.055 "max_io_size": 131072, 00:06:45.055 
"io_unit_size": 131072, 00:06:45.055 "max_aq_depth": 128, 00:06:45.055 "num_shared_buffers": 511, 00:06:45.055 "buf_cache_size": 4294967295, 00:06:45.055 "dif_insert_or_strip": false, 00:06:45.055 "zcopy": false, 00:06:45.055 "c2h_success": true, 00:06:45.055 "sock_priority": 0, 00:06:45.055 "abort_timeout_sec": 1, 00:06:45.055 "ack_timeout": 0, 00:06:45.055 "data_wr_pool_size": 0 00:06:45.055 } 00:06:45.055 } 00:06:45.055 ] 00:06:45.055 }, 00:06:45.055 { 00:06:45.055 "subsystem": "iscsi", 00:06:45.055 "config": [ 00:06:45.055 { 00:06:45.055 "method": "iscsi_set_options", 00:06:45.055 "params": { 00:06:45.055 "node_base": "iqn.2016-06.io.spdk", 00:06:45.055 "max_sessions": 128, 00:06:45.055 "max_connections_per_session": 2, 00:06:45.055 "max_queue_depth": 64, 00:06:45.055 "default_time2wait": 2, 00:06:45.055 "default_time2retain": 20, 00:06:45.055 "first_burst_length": 8192, 00:06:45.055 "immediate_data": true, 00:06:45.055 "allow_duplicated_isid": false, 00:06:45.055 "error_recovery_level": 0, 00:06:45.055 "nop_timeout": 60, 00:06:45.055 "nop_in_interval": 30, 00:06:45.055 "disable_chap": false, 00:06:45.055 "require_chap": false, 00:06:45.055 "mutual_chap": false, 00:06:45.055 "chap_group": 0, 00:06:45.055 "max_large_datain_per_connection": 64, 00:06:45.055 "max_r2t_per_connection": 4, 00:06:45.055 "pdu_pool_size": 36864, 00:06:45.055 "immediate_data_pool_size": 16384, 00:06:45.055 "data_out_pool_size": 2048 00:06:45.055 } 00:06:45.055 } 00:06:45.055 ] 00:06:45.055 } 00:06:45.055 ] 00:06:45.055 } 00:06:45.055 10:34:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:45.055 10:34:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3734930 00:06:45.055 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3734930 ']' 00:06:45.055 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3734930 00:06:45.055 10:34:34 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:06:45.055 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.055 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3734930 00:06:45.055 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.055 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.055 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3734930' 00:06:45.055 killing process with pid 3734930 00:06:45.055 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3734930 00:06:45.055 10:34:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3734930 00:06:45.315 10:34:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3735174 00:06:45.315 10:34:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:45.315 10:34:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:50.588 10:34:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3735174 00:06:50.588 10:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3735174 ']' 00:06:50.588 10:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3735174 00:06:50.588 10:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:50.588 10:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.588 10:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3735174 00:06:50.588 10:34:40 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.588 10:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.588 10:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3735174' 00:06:50.588 killing process with pid 3735174 00:06:50.588 10:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3735174 00:06:50.588 10:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3735174 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:50.848 00:06:50.848 real 0m6.789s 00:06:50.848 user 0m6.636s 00:06:50.848 sys 0m0.634s 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:50.848 ************************************ 00:06:50.848 END TEST skip_rpc_with_json 00:06:50.848 ************************************ 00:06:50.848 10:34:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:50.848 10:34:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.848 10:34:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.848 10:34:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.848 ************************************ 00:06:50.848 START TEST skip_rpc_with_delay 00:06:50.848 ************************************ 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:50.848 [2024-11-19 10:34:40.572638] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.848 00:06:50.848 real 0m0.070s 00:06:50.848 user 0m0.049s 00:06:50.848 sys 0m0.020s 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.848 10:34:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:50.848 ************************************ 00:06:50.848 END TEST skip_rpc_with_delay 00:06:50.848 ************************************ 00:06:50.848 10:34:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:50.848 10:34:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:50.848 10:34:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:50.848 10:34:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.848 10:34:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.848 10:34:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.108 ************************************ 00:06:51.108 START TEST exit_on_failed_rpc_init 00:06:51.108 ************************************ 00:06:51.108 10:34:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:51.108 10:34:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3736147 00:06:51.108 10:34:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3736147 00:06:51.108 10:34:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:06:51.108 10:34:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3736147 ']' 00:06:51.108 10:34:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.108 10:34:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.108 10:34:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.108 10:34:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.108 10:34:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:51.108 [2024-11-19 10:34:40.712104] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:06:51.108 [2024-11-19 10:34:40.712143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3736147 ] 00:06:51.108 [2024-11-19 10:34:40.787543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.108 [2024-11-19 10:34:40.828804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:52.048 
10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:52.048 [2024-11-19 10:34:41.579404] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:06:52.048 [2024-11-19 10:34:41.579448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3736243 ] 00:06:52.048 [2024-11-19 10:34:41.654418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.048 [2024-11-19 10:34:41.694378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.048 [2024-11-19 10:34:41.694432] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:52.048 [2024-11-19 10:34:41.694442] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:52.048 [2024-11-19 10:34:41.694450] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3736147 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3736147 ']' 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3736147 00:06:52.048 10:34:41 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3736147 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3736147' 00:06:52.048 killing process with pid 3736147 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3736147 00:06:52.048 10:34:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3736147 00:06:52.308 00:06:52.308 real 0m1.426s 00:06:52.308 user 0m1.632s 00:06:52.308 sys 0m0.407s 00:06:52.308 10:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.308 10:34:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:52.308 ************************************ 00:06:52.308 END TEST exit_on_failed_rpc_init 00:06:52.308 ************************************ 00:06:52.566 10:34:42 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:52.567 00:06:52.567 real 0m14.117s 00:06:52.567 user 0m13.665s 00:06:52.567 sys 0m1.612s 00:06:52.567 10:34:42 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.567 10:34:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.567 ************************************ 00:06:52.567 END TEST skip_rpc 00:06:52.567 ************************************ 00:06:52.567 10:34:42 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:52.567 10:34:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.567 10:34:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.567 10:34:42 -- common/autotest_common.sh@10 -- # set +x 00:06:52.567 ************************************ 00:06:52.567 START TEST rpc_client 00:06:52.567 ************************************ 00:06:52.567 10:34:42 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:52.567 * Looking for test storage... 00:06:52.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:52.567 10:34:42 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:52.567 10:34:42 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:52.567 10:34:42 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:52.567 10:34:42 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:52.567 10:34:42 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.567 10:34:42 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.567 10:34:42 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.567 10:34:42 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.567 10:34:42 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.567 10:34:42 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.567 10:34:42 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.825 10:34:42 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.825 10:34:42 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.825 10:34:42 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.825 10:34:42 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.825 10:34:42 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:06:52.825 10:34:42 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:52.825 10:34:42 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.825 10:34:42 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:52.825 10:34:42 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:52.825 10:34:42 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:52.825 10:34:42 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.826 10:34:42 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:52.826 10:34:42 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.826 10:34:42 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:52.826 10:34:42 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:52.826 10:34:42 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.826 10:34:42 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:52.826 10:34:42 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.826 10:34:42 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.826 10:34:42 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.826 10:34:42 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:52.826 10:34:42 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.826 10:34:42 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:52.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.826 --rc genhtml_branch_coverage=1 00:06:52.826 --rc genhtml_function_coverage=1 00:06:52.826 --rc genhtml_legend=1 00:06:52.826 --rc geninfo_all_blocks=1 00:06:52.826 --rc geninfo_unexecuted_blocks=1 00:06:52.826 00:06:52.826 ' 00:06:52.826 10:34:42 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:52.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.826 --rc genhtml_branch_coverage=1 
00:06:52.826 --rc genhtml_function_coverage=1 00:06:52.826 --rc genhtml_legend=1 00:06:52.826 --rc geninfo_all_blocks=1 00:06:52.826 --rc geninfo_unexecuted_blocks=1 00:06:52.826 00:06:52.826 ' 00:06:52.826 10:34:42 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:52.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.826 --rc genhtml_branch_coverage=1 00:06:52.826 --rc genhtml_function_coverage=1 00:06:52.826 --rc genhtml_legend=1 00:06:52.826 --rc geninfo_all_blocks=1 00:06:52.826 --rc geninfo_unexecuted_blocks=1 00:06:52.826 00:06:52.826 ' 00:06:52.826 10:34:42 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:52.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.826 --rc genhtml_branch_coverage=1 00:06:52.826 --rc genhtml_function_coverage=1 00:06:52.826 --rc genhtml_legend=1 00:06:52.826 --rc geninfo_all_blocks=1 00:06:52.826 --rc geninfo_unexecuted_blocks=1 00:06:52.826 00:06:52.826 ' 00:06:52.826 10:34:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:52.826 OK 00:06:52.826 10:34:42 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:52.826 00:06:52.826 real 0m0.197s 00:06:52.826 user 0m0.124s 00:06:52.826 sys 0m0.086s 00:06:52.826 10:34:42 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.826 10:34:42 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:52.826 ************************************ 00:06:52.826 END TEST rpc_client 00:06:52.826 ************************************ 00:06:52.826 10:34:42 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:52.826 10:34:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.826 10:34:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.826 10:34:42 -- common/autotest_common.sh@10 
-- # set +x 00:06:52.826 ************************************ 00:06:52.826 START TEST json_config 00:06:52.826 ************************************ 00:06:52.826 10:34:42 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:52.826 10:34:42 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:52.826 10:34:42 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:52.826 10:34:42 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:52.826 10:34:42 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:52.826 10:34:42 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.826 10:34:42 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.826 10:34:42 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.826 10:34:42 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.826 10:34:42 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.826 10:34:42 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.826 10:34:42 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.826 10:34:42 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.826 10:34:42 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.826 10:34:42 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.826 10:34:42 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.826 10:34:42 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:52.826 10:34:42 json_config -- scripts/common.sh@345 -- # : 1 00:06:52.826 10:34:42 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.826 10:34:42 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.826 10:34:42 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:52.826 10:34:42 json_config -- scripts/common.sh@353 -- # local d=1 00:06:52.826 10:34:42 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.826 10:34:42 json_config -- scripts/common.sh@355 -- # echo 1 00:06:52.826 10:34:42 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.826 10:34:42 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:52.826 10:34:42 json_config -- scripts/common.sh@353 -- # local d=2 00:06:52.826 10:34:42 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.826 10:34:42 json_config -- scripts/common.sh@355 -- # echo 2 00:06:52.826 10:34:42 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.826 10:34:42 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.826 10:34:42 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.826 10:34:42 json_config -- scripts/common.sh@368 -- # return 0 00:06:52.826 10:34:42 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.826 10:34:42 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:52.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.826 --rc genhtml_branch_coverage=1 00:06:52.826 --rc genhtml_function_coverage=1 00:06:52.826 --rc genhtml_legend=1 00:06:52.826 --rc geninfo_all_blocks=1 00:06:52.826 --rc geninfo_unexecuted_blocks=1 00:06:52.826 00:06:52.826 ' 00:06:52.826 10:34:42 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:52.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.826 --rc genhtml_branch_coverage=1 00:06:52.826 --rc genhtml_function_coverage=1 00:06:52.826 --rc genhtml_legend=1 00:06:52.826 --rc geninfo_all_blocks=1 00:06:52.826 --rc geninfo_unexecuted_blocks=1 00:06:52.826 00:06:52.826 ' 00:06:52.826 10:34:42 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:52.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.826 --rc genhtml_branch_coverage=1 00:06:52.826 --rc genhtml_function_coverage=1 00:06:52.826 --rc genhtml_legend=1 00:06:52.826 --rc geninfo_all_blocks=1 00:06:52.826 --rc geninfo_unexecuted_blocks=1 00:06:52.826 00:06:52.826 ' 00:06:52.826 10:34:42 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:52.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.826 --rc genhtml_branch_coverage=1 00:06:52.826 --rc genhtml_function_coverage=1 00:06:52.826 --rc genhtml_legend=1 00:06:52.826 --rc geninfo_all_blocks=1 00:06:52.826 --rc geninfo_unexecuted_blocks=1 00:06:52.826 00:06:52.826 ' 00:06:52.826 10:34:42 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:52.826 10:34:42 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:53.085 10:34:42 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.085 10:34:42 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.085 10:34:42 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.085 10:34:42 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.085 10:34:42 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.085 10:34:42 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.085 10:34:42 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.085 10:34:42 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.085 10:34:42 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.085 10:34:42 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.085 10:34:42 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:53.085 10:34:42 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:53.085 10:34:42 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.085 10:34:42 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.085 10:34:42 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:53.085 10:34:42 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.085 10:34:42 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.085 10:34:42 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:53.085 10:34:42 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.085 10:34:42 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.085 10:34:42 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.086 10:34:42 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.086 10:34:42 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.086 10:34:42 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.086 10:34:42 json_config -- paths/export.sh@5 -- # export PATH 00:06:53.086 10:34:42 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.086 10:34:42 json_config -- nvmf/common.sh@51 -- # : 0 00:06:53.086 10:34:42 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:53.086 10:34:42 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:53.086 10:34:42 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.086 10:34:42 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.086 10:34:42 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.086 10:34:42 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:53.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:53.086 10:34:42 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:53.086 10:34:42 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:53.086 10:34:42 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:53.086 INFO: JSON configuration test init 00:06:53.086 10:34:42 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:53.086 10:34:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.086 10:34:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:53.086 10:34:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.086 10:34:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.086 10:34:42 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:53.086 10:34:42 json_config -- json_config/common.sh@9 -- # local app=target 00:06:53.086 10:34:42 json_config -- json_config/common.sh@10 -- # shift 00:06:53.086 10:34:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:53.086 10:34:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:53.086 10:34:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:53.086 10:34:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.086 10:34:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.086 10:34:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3736519 00:06:53.086 10:34:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:53.086 Waiting for target to run... 
00:06:53.086 10:34:42 json_config -- json_config/common.sh@25 -- # waitforlisten 3736519 /var/tmp/spdk_tgt.sock 00:06:53.086 10:34:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:53.086 10:34:42 json_config -- common/autotest_common.sh@835 -- # '[' -z 3736519 ']' 00:06:53.086 10:34:42 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:53.086 10:34:42 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.086 10:34:42 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:53.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:53.086 10:34:42 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.086 10:34:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.086 [2024-11-19 10:34:42.712747] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:06:53.086 [2024-11-19 10:34:42.712792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3736519 ] 00:06:53.346 [2024-11-19 10:34:43.006963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.346 [2024-11-19 10:34:43.041370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.914 10:34:43 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.914 10:34:43 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:53.914 10:34:43 json_config -- json_config/common.sh@26 -- # echo '' 00:06:53.914 00:06:53.914 10:34:43 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:53.914 10:34:43 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:53.914 10:34:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.914 10:34:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.914 10:34:43 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:53.915 10:34:43 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:53.915 10:34:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:53.915 10:34:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.915 10:34:43 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:53.915 10:34:43 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:53.915 10:34:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:57.203 10:34:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:57.203 10:34:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:57.203 10:34:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@54 -- # sort 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:57.203 10:34:46 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:57.203 10:34:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:57.203 10:34:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:57.203 10:34:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:57.203 10:34:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:57.203 10:34:46 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:57.203 10:34:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:57.462 MallocForNvmf0 00:06:57.462 10:34:47 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:06:57.462 10:34:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:57.721 MallocForNvmf1 00:06:57.721 10:34:47 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:57.721 10:34:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:57.721 [2024-11-19 10:34:47.471760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.721 10:34:47 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:57.721 10:34:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:57.979 10:34:47 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:57.979 10:34:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:58.238 10:34:47 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:58.238 10:34:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:58.238 10:34:48 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:58.238 10:34:48 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:58.496 [2024-11-19 10:34:48.165959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:58.497 10:34:48 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:58.497 10:34:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:58.497 10:34:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:58.497 10:34:48 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:58.497 10:34:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:58.497 10:34:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:58.497 10:34:48 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:58.497 10:34:48 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:58.497 10:34:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:58.755 MallocBdevForConfigChangeCheck 00:06:58.755 10:34:48 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:58.755 10:34:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:58.755 10:34:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:58.755 10:34:48 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:58.756 10:34:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:59.014 10:34:48 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:06:59.014 INFO: shutting down applications... 00:06:59.014 10:34:48 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:59.014 10:34:48 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:59.014 10:34:48 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:59.014 10:34:48 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:01.548 Calling clear_iscsi_subsystem 00:07:01.548 Calling clear_nvmf_subsystem 00:07:01.548 Calling clear_nbd_subsystem 00:07:01.548 Calling clear_ublk_subsystem 00:07:01.548 Calling clear_vhost_blk_subsystem 00:07:01.548 Calling clear_vhost_scsi_subsystem 00:07:01.548 Calling clear_bdev_subsystem 00:07:01.548 10:34:50 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:07:01.548 10:34:50 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:01.548 10:34:50 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:01.548 10:34:50 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:01.548 10:34:50 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:01.548 10:34:50 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:01.548 10:34:51 json_config -- json_config/json_config.sh@352 -- # break 00:07:01.548 10:34:51 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:01.548 10:34:51 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:07:01.548 10:34:51 json_config -- json_config/common.sh@31 -- # local app=target 00:07:01.548 10:34:51 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:01.548 10:34:51 json_config -- json_config/common.sh@35 -- # [[ -n 3736519 ]] 00:07:01.548 10:34:51 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3736519 00:07:01.548 10:34:51 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:01.548 10:34:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:01.548 10:34:51 json_config -- json_config/common.sh@41 -- # kill -0 3736519 00:07:01.548 10:34:51 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:02.117 10:34:51 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:02.117 10:34:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:02.117 10:34:51 json_config -- json_config/common.sh@41 -- # kill -0 3736519 00:07:02.117 10:34:51 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:02.117 10:34:51 json_config -- json_config/common.sh@43 -- # break 00:07:02.117 10:34:51 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:02.117 10:34:51 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:02.117 SPDK target shutdown done 00:07:02.117 10:34:51 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:07:02.117 INFO: relaunching applications... 
00:07:02.117 10:34:51 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:02.117 10:34:51 json_config -- json_config/common.sh@9 -- # local app=target 00:07:02.117 10:34:51 json_config -- json_config/common.sh@10 -- # shift 00:07:02.117 10:34:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:02.117 10:34:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:02.117 10:34:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:02.117 10:34:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:02.117 10:34:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:02.117 10:34:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3738255 00:07:02.117 10:34:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:02.117 Waiting for target to run... 00:07:02.117 10:34:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:02.117 10:34:51 json_config -- json_config/common.sh@25 -- # waitforlisten 3738255 /var/tmp/spdk_tgt.sock 00:07:02.117 10:34:51 json_config -- common/autotest_common.sh@835 -- # '[' -z 3738255 ']' 00:07:02.117 10:34:51 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:02.117 10:34:51 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.117 10:34:51 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:02.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:02.117 10:34:51 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.117 10:34:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.117 [2024-11-19 10:34:51.851099] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:07:02.117 [2024-11-19 10:34:51.851149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738255 ] 00:07:02.376 [2024-11-19 10:34:52.139978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.635 [2024-11-19 10:34:52.174696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.925 [2024-11-19 10:34:55.204420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.925 [2024-11-19 10:34:55.236774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:05.925 10:34:55 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.925 10:34:55 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:05.925 10:34:55 json_config -- json_config/common.sh@26 -- # echo '' 00:07:05.925 00:07:05.925 10:34:55 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:05.925 10:34:55 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:05.925 INFO: Checking if target configuration is the same... 
00:07:05.925 10:34:55 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:05.925 10:34:55 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:05.925 10:34:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:05.925 + '[' 2 -ne 2 ']' 00:07:05.925 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:05.925 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:05.925 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:05.925 +++ basename /dev/fd/62 00:07:05.925 ++ mktemp /tmp/62.XXX 00:07:05.925 + tmp_file_1=/tmp/62.Qjm 00:07:05.925 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:05.925 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:05.925 + tmp_file_2=/tmp/spdk_tgt_config.json.JLr 00:07:05.925 + ret=0 00:07:05.925 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:05.925 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:05.925 + diff -u /tmp/62.Qjm /tmp/spdk_tgt_config.json.JLr 00:07:05.925 + echo 'INFO: JSON config files are the same' 00:07:05.925 INFO: JSON config files are the same 00:07:05.925 + rm /tmp/62.Qjm /tmp/spdk_tgt_config.json.JLr 00:07:05.925 + exit 0 00:07:05.925 10:34:55 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:05.925 10:34:55 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:05.925 INFO: changing configuration and checking if this can be detected... 
00:07:05.925 10:34:55 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:05.925 10:34:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:06.184 10:34:55 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:06.184 10:34:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:06.184 10:34:55 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:06.184 + '[' 2 -ne 2 ']' 00:07:06.184 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:06.184 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:07:06.184 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:06.184 +++ basename /dev/fd/62 00:07:06.184 ++ mktemp /tmp/62.XXX 00:07:06.184 + tmp_file_1=/tmp/62.oE0 00:07:06.184 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:06.184 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:06.184 + tmp_file_2=/tmp/spdk_tgt_config.json.kp0 00:07:06.184 + ret=0 00:07:06.184 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:06.443 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:06.703 + diff -u /tmp/62.oE0 /tmp/spdk_tgt_config.json.kp0 00:07:06.703 + ret=1 00:07:06.703 + echo '=== Start of file: /tmp/62.oE0 ===' 00:07:06.703 + cat /tmp/62.oE0 00:07:06.703 + echo '=== End of file: /tmp/62.oE0 ===' 00:07:06.703 + echo '' 00:07:06.703 + echo '=== Start of file: /tmp/spdk_tgt_config.json.kp0 ===' 00:07:06.703 + cat /tmp/spdk_tgt_config.json.kp0 00:07:06.703 + echo '=== End of file: /tmp/spdk_tgt_config.json.kp0 ===' 00:07:06.703 + echo '' 00:07:06.703 + rm /tmp/62.oE0 /tmp/spdk_tgt_config.json.kp0 00:07:06.703 + exit 1 00:07:06.703 10:34:56 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:07:06.703 INFO: configuration change detected. 
00:07:06.703 10:34:56 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:06.703 10:34:56 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:06.703 10:34:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.703 10:34:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.703 10:34:56 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:06.703 10:34:56 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:06.703 10:34:56 json_config -- json_config/json_config.sh@324 -- # [[ -n 3738255 ]] 00:07:06.703 10:34:56 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:06.703 10:34:56 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:06.703 10:34:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.703 10:34:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.703 10:34:56 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:06.703 10:34:56 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:06.703 10:34:56 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:06.703 10:34:56 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:06.703 10:34:56 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:06.703 10:34:56 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:06.703 10:34:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.703 10:34:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.703 10:34:56 json_config -- json_config/json_config.sh@330 -- # killprocess 3738255 00:07:06.703 10:34:56 json_config -- common/autotest_common.sh@954 -- # '[' -z 3738255 ']' 00:07:06.703 10:34:56 json_config -- common/autotest_common.sh@958 -- # kill -0 
3738255 00:07:06.703 10:34:56 json_config -- common/autotest_common.sh@959 -- # uname 00:07:06.703 10:34:56 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.703 10:34:56 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3738255 00:07:06.703 10:34:56 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.703 10:34:56 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.703 10:34:56 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3738255' 00:07:06.703 killing process with pid 3738255 00:07:06.703 10:34:56 json_config -- common/autotest_common.sh@973 -- # kill 3738255 00:07:06.703 10:34:56 json_config -- common/autotest_common.sh@978 -- # wait 3738255 00:07:08.607 10:34:58 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:08.607 10:34:58 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:08.607 10:34:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:08.607 10:34:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:08.867 10:34:58 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:08.867 10:34:58 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:08.867 INFO: Success 00:07:08.867 00:07:08.867 real 0m15.947s 00:07:08.867 user 0m16.431s 00:07:08.867 sys 0m2.369s 00:07:08.867 10:34:58 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.867 10:34:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:08.867 ************************************ 00:07:08.867 END TEST json_config 00:07:08.867 ************************************ 00:07:08.867 10:34:58 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:08.867 10:34:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.867 10:34:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.867 10:34:58 -- common/autotest_common.sh@10 -- # set +x 00:07:08.867 ************************************ 00:07:08.867 START TEST json_config_extra_key 00:07:08.867 ************************************ 00:07:08.867 10:34:58 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:08.867 10:34:58 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:08.867 10:34:58 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:07:08.867 10:34:58 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:08.867 10:34:58 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:08.867 10:34:58 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.867 10:34:58 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:08.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.867 --rc genhtml_branch_coverage=1 00:07:08.867 --rc genhtml_function_coverage=1 00:07:08.867 --rc genhtml_legend=1 00:07:08.867 --rc geninfo_all_blocks=1 
00:07:08.867 --rc geninfo_unexecuted_blocks=1 00:07:08.867 00:07:08.867 ' 00:07:08.867 10:34:58 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:08.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.867 --rc genhtml_branch_coverage=1 00:07:08.867 --rc genhtml_function_coverage=1 00:07:08.867 --rc genhtml_legend=1 00:07:08.867 --rc geninfo_all_blocks=1 00:07:08.867 --rc geninfo_unexecuted_blocks=1 00:07:08.867 00:07:08.867 ' 00:07:08.867 10:34:58 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:08.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.867 --rc genhtml_branch_coverage=1 00:07:08.867 --rc genhtml_function_coverage=1 00:07:08.867 --rc genhtml_legend=1 00:07:08.867 --rc geninfo_all_blocks=1 00:07:08.867 --rc geninfo_unexecuted_blocks=1 00:07:08.867 00:07:08.867 ' 00:07:08.867 10:34:58 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:08.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.867 --rc genhtml_branch_coverage=1 00:07:08.867 --rc genhtml_function_coverage=1 00:07:08.867 --rc genhtml_legend=1 00:07:08.867 --rc geninfo_all_blocks=1 00:07:08.867 --rc geninfo_unexecuted_blocks=1 00:07:08.867 00:07:08.867 ' 00:07:08.867 10:34:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.867 10:34:58 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.867 10:34:58 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.867 10:34:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.867 10:34:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.867 10:34:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:08.867 10:34:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:08.867 10:34:58 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.867 10:34:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.868 10:34:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:08.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:08.868 10:34:58 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:08.868 10:34:58 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:08.868 10:34:58 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:09.127 10:34:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:09.127 10:34:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:09.127 10:34:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:09.127 10:34:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:09.127 10:34:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:09.127 10:34:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:09.127 10:34:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:09.127 10:34:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:09.127 10:34:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:09.127 10:34:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:09.127 10:34:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:09.127 INFO: launching applications... 00:07:09.127 10:34:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:09.127 10:34:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:09.127 10:34:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:09.127 10:34:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:09.127 10:34:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:09.127 10:34:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:09.127 10:34:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:09.127 10:34:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:09.127 10:34:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3739531 00:07:09.127 10:34:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:09.127 Waiting for target to run... 
00:07:09.127 10:34:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3739531 /var/tmp/spdk_tgt.sock 00:07:09.127 10:34:58 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3739531 ']' 00:07:09.127 10:34:58 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:09.127 10:34:58 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:09.127 10:34:58 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.127 10:34:58 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:09.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:09.127 10:34:58 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.127 10:34:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:09.127 [2024-11-19 10:34:58.712887] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:07:09.127 [2024-11-19 10:34:58.712932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3739531 ] 00:07:09.387 [2024-11-19 10:34:58.988411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.387 [2024-11-19 10:34:59.022447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.953 10:34:59 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.953 10:34:59 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:09.953 10:34:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:09.953 00:07:09.953 10:34:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:09.953 INFO: shutting down applications... 00:07:09.953 10:34:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:09.953 10:34:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:09.953 10:34:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:09.953 10:34:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3739531 ]] 00:07:09.953 10:34:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3739531 00:07:09.953 10:34:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:09.953 10:34:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:09.953 10:34:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3739531 00:07:09.953 10:34:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:10.521 10:35:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:10.522 10:35:00 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:07:10.522 10:35:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3739531 00:07:10.522 10:35:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:10.522 10:35:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:10.522 10:35:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:10.522 10:35:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:10.522 SPDK target shutdown done 00:07:10.522 10:35:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:10.522 Success 00:07:10.522 00:07:10.522 real 0m1.556s 00:07:10.522 user 0m1.328s 00:07:10.522 sys 0m0.397s 00:07:10.522 10:35:00 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.522 10:35:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:10.522 ************************************ 00:07:10.522 END TEST json_config_extra_key 00:07:10.522 ************************************ 00:07:10.522 10:35:00 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:10.522 10:35:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.522 10:35:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.522 10:35:00 -- common/autotest_common.sh@10 -- # set +x 00:07:10.522 ************************************ 00:07:10.522 START TEST alias_rpc 00:07:10.522 ************************************ 00:07:10.522 10:35:00 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:10.522 * Looking for test storage... 
00:07:10.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:07:10.522 10:35:00 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.522 10:35:00 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.522 10:35:00 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.522 10:35:00 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.522 10:35:00 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:10.522 10:35:00 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.522 10:35:00 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.522 --rc genhtml_branch_coverage=1 00:07:10.522 --rc genhtml_function_coverage=1 00:07:10.522 --rc genhtml_legend=1 00:07:10.522 --rc geninfo_all_blocks=1 00:07:10.522 --rc geninfo_unexecuted_blocks=1 00:07:10.522 00:07:10.522 ' 00:07:10.522 10:35:00 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.522 --rc genhtml_branch_coverage=1 00:07:10.522 --rc genhtml_function_coverage=1 00:07:10.522 --rc genhtml_legend=1 00:07:10.522 --rc geninfo_all_blocks=1 00:07:10.522 --rc geninfo_unexecuted_blocks=1 00:07:10.522 00:07:10.522 ' 00:07:10.522 10:35:00 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:07:10.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.522 --rc genhtml_branch_coverage=1 00:07:10.522 --rc genhtml_function_coverage=1 00:07:10.522 --rc genhtml_legend=1 00:07:10.522 --rc geninfo_all_blocks=1 00:07:10.522 --rc geninfo_unexecuted_blocks=1 00:07:10.522 00:07:10.522 ' 00:07:10.522 10:35:00 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:10.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.522 --rc genhtml_branch_coverage=1 00:07:10.522 --rc genhtml_function_coverage=1 00:07:10.522 --rc genhtml_legend=1 00:07:10.522 --rc geninfo_all_blocks=1 00:07:10.522 --rc geninfo_unexecuted_blocks=1 00:07:10.522 00:07:10.522 ' 00:07:10.522 10:35:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:10.522 10:35:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3739817 00:07:10.522 10:35:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3739817 00:07:10.522 10:35:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:10.522 10:35:00 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3739817 ']' 00:07:10.522 10:35:00 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.522 10:35:00 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.522 10:35:00 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.522 10:35:00 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.522 10:35:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.782 [2024-11-19 10:35:00.328509] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
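[Editor's aside: the xtrace above walks through SPDK's `scripts/common.sh` version check (`lt 1.15 2` → `cmp_versions`): both version strings are split on `.`, `-` and `:` via `IFS`, then compared component by component. A minimal standalone sketch of that comparison, assuming purely numeric components (the real script additionally regex-checks each component); the function name `version_lt` is illustrative, not from the source tree.]

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version comparison traced above.
# Returns 0 (success) when $1 is strictly less than $2.
version_lt() {
    local IFS=.-:            # split versions on '.', '-' and ':', as common.sh does
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v c1 c2
    # Walk up to the longer of the two component lists.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        c1=${ver1[v]:-0}     # missing components compare as 0, so "1" == "1.0"
        c2=${ver2[v]:-0}
        ((c1 < c2)) && return 0
        ((c1 > c2)) && return 1
    done
    return 1                 # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This is why the trace settles on `return 0` for `lt 1.15 2`: the very first components already decide the comparison (1 < 2), so the remaining components are never consulted.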
00:07:10.782 [2024-11-19 10:35:00.328555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3739817 ] 00:07:10.782 [2024-11-19 10:35:00.403781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.782 [2024-11-19 10:35:00.447106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.040 10:35:00 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.040 10:35:00 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:11.040 10:35:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:11.300 10:35:00 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3739817 00:07:11.300 10:35:00 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3739817 ']' 00:07:11.300 10:35:00 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3739817 00:07:11.300 10:35:00 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:11.300 10:35:00 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.300 10:35:00 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3739817 00:07:11.300 10:35:00 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.300 10:35:00 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.300 10:35:00 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3739817' 00:07:11.300 killing process with pid 3739817 00:07:11.300 10:35:00 alias_rpc -- common/autotest_common.sh@973 -- # kill 3739817 00:07:11.300 10:35:00 alias_rpc -- common/autotest_common.sh@978 -- # wait 3739817 00:07:11.560 00:07:11.560 real 0m1.147s 00:07:11.560 user 0m1.156s 00:07:11.560 sys 0m0.418s 00:07:11.560 10:35:01 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.560 10:35:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.560 ************************************ 00:07:11.560 END TEST alias_rpc 00:07:11.560 ************************************ 00:07:11.560 10:35:01 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:11.560 10:35:01 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:11.560 10:35:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.560 10:35:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.560 10:35:01 -- common/autotest_common.sh@10 -- # set +x 00:07:11.560 ************************************ 00:07:11.560 START TEST spdkcli_tcp 00:07:11.560 ************************************ 00:07:11.560 10:35:01 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:11.819 * Looking for test storage... 
00:07:11.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:11.819 10:35:01 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:11.819 10:35:01 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:11.819 10:35:01 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.819 10:35:01 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.819 10:35:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.819 10:35:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.819 10:35:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.819 10:35:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.819 10:35:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.819 10:35:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.819 10:35:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.819 10:35:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.819 10:35:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.820 10:35:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:11.820 10:35:01 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.820 10:35:01 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.820 --rc genhtml_branch_coverage=1 00:07:11.820 --rc genhtml_function_coverage=1 00:07:11.820 --rc genhtml_legend=1 00:07:11.820 --rc geninfo_all_blocks=1 00:07:11.820 --rc geninfo_unexecuted_blocks=1 00:07:11.820 00:07:11.820 ' 00:07:11.820 10:35:01 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.820 --rc genhtml_branch_coverage=1 00:07:11.820 --rc genhtml_function_coverage=1 00:07:11.820 --rc genhtml_legend=1 00:07:11.820 --rc geninfo_all_blocks=1 00:07:11.820 --rc geninfo_unexecuted_blocks=1 00:07:11.820 00:07:11.820 ' 00:07:11.820 10:35:01 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.820 --rc genhtml_branch_coverage=1 00:07:11.820 --rc genhtml_function_coverage=1 00:07:11.820 --rc genhtml_legend=1 00:07:11.820 --rc geninfo_all_blocks=1 00:07:11.820 --rc geninfo_unexecuted_blocks=1 00:07:11.820 00:07:11.820 ' 00:07:11.820 10:35:01 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.820 --rc genhtml_branch_coverage=1 00:07:11.820 --rc genhtml_function_coverage=1 00:07:11.820 --rc genhtml_legend=1 00:07:11.820 --rc geninfo_all_blocks=1 00:07:11.820 --rc geninfo_unexecuted_blocks=1 00:07:11.820 00:07:11.820 ' 00:07:11.820 10:35:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:11.820 10:35:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:11.820 10:35:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:11.820 10:35:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:11.820 10:35:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:11.820 10:35:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:11.820 10:35:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:11.820 10:35:01 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.820 10:35:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:11.820 10:35:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3740104 00:07:11.820 10:35:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3740104 00:07:11.820 10:35:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:11.820 10:35:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3740104 ']' 00:07:11.820 10:35:01 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.820 10:35:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.820 10:35:01 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.820 10:35:01 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.820 10:35:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:11.820 [2024-11-19 10:35:01.550992] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:07:11.820 [2024-11-19 10:35:01.551035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3740104 ] 00:07:12.079 [2024-11-19 10:35:01.624720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:12.079 [2024-11-19 10:35:01.667714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.079 [2024-11-19 10:35:01.667715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.338 10:35:01 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.338 10:35:01 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:12.338 10:35:01 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3740120 00:07:12.338 10:35:01 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:12.338 10:35:01 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:12.338 [ 00:07:12.338 "bdev_malloc_delete", 00:07:12.338 "bdev_malloc_create", 00:07:12.338 "bdev_null_resize", 00:07:12.338 "bdev_null_delete", 00:07:12.338 "bdev_null_create", 00:07:12.338 "bdev_nvme_cuse_unregister", 00:07:12.338 "bdev_nvme_cuse_register", 00:07:12.338 "bdev_opal_new_user", 00:07:12.338 "bdev_opal_set_lock_state", 00:07:12.338 "bdev_opal_delete", 00:07:12.338 "bdev_opal_get_info", 00:07:12.338 "bdev_opal_create", 00:07:12.338 "bdev_nvme_opal_revert", 00:07:12.338 "bdev_nvme_opal_init", 00:07:12.338 "bdev_nvme_send_cmd", 00:07:12.338 "bdev_nvme_set_keys", 00:07:12.338 "bdev_nvme_get_path_iostat", 00:07:12.338 "bdev_nvme_get_mdns_discovery_info", 00:07:12.338 "bdev_nvme_stop_mdns_discovery", 00:07:12.338 "bdev_nvme_start_mdns_discovery", 00:07:12.338 "bdev_nvme_set_multipath_policy", 00:07:12.338 "bdev_nvme_set_preferred_path", 00:07:12.338 "bdev_nvme_get_io_paths", 00:07:12.338 "bdev_nvme_remove_error_injection", 00:07:12.338 "bdev_nvme_add_error_injection", 00:07:12.338 "bdev_nvme_get_discovery_info", 00:07:12.338 "bdev_nvme_stop_discovery", 00:07:12.338 "bdev_nvme_start_discovery", 00:07:12.338 "bdev_nvme_get_controller_health_info", 00:07:12.338 "bdev_nvme_disable_controller", 00:07:12.338 "bdev_nvme_enable_controller", 00:07:12.338 "bdev_nvme_reset_controller", 00:07:12.338 "bdev_nvme_get_transport_statistics", 00:07:12.338 "bdev_nvme_apply_firmware", 00:07:12.338 "bdev_nvme_detach_controller", 00:07:12.338 "bdev_nvme_get_controllers", 00:07:12.338 "bdev_nvme_attach_controller", 00:07:12.338 "bdev_nvme_set_hotplug", 00:07:12.338 "bdev_nvme_set_options", 00:07:12.338 "bdev_passthru_delete", 00:07:12.338 "bdev_passthru_create", 00:07:12.338 "bdev_lvol_set_parent_bdev", 00:07:12.338 "bdev_lvol_set_parent", 00:07:12.338 "bdev_lvol_check_shallow_copy", 00:07:12.338 "bdev_lvol_start_shallow_copy", 00:07:12.338 "bdev_lvol_grow_lvstore", 00:07:12.338 
"bdev_lvol_get_lvols", 00:07:12.338 "bdev_lvol_get_lvstores", 00:07:12.338 "bdev_lvol_delete", 00:07:12.338 "bdev_lvol_set_read_only", 00:07:12.338 "bdev_lvol_resize", 00:07:12.338 "bdev_lvol_decouple_parent", 00:07:12.338 "bdev_lvol_inflate", 00:07:12.338 "bdev_lvol_rename", 00:07:12.339 "bdev_lvol_clone_bdev", 00:07:12.339 "bdev_lvol_clone", 00:07:12.339 "bdev_lvol_snapshot", 00:07:12.339 "bdev_lvol_create", 00:07:12.339 "bdev_lvol_delete_lvstore", 00:07:12.339 "bdev_lvol_rename_lvstore", 00:07:12.339 "bdev_lvol_create_lvstore", 00:07:12.339 "bdev_raid_set_options", 00:07:12.339 "bdev_raid_remove_base_bdev", 00:07:12.339 "bdev_raid_add_base_bdev", 00:07:12.339 "bdev_raid_delete", 00:07:12.339 "bdev_raid_create", 00:07:12.339 "bdev_raid_get_bdevs", 00:07:12.339 "bdev_error_inject_error", 00:07:12.339 "bdev_error_delete", 00:07:12.339 "bdev_error_create", 00:07:12.339 "bdev_split_delete", 00:07:12.339 "bdev_split_create", 00:07:12.339 "bdev_delay_delete", 00:07:12.339 "bdev_delay_create", 00:07:12.339 "bdev_delay_update_latency", 00:07:12.339 "bdev_zone_block_delete", 00:07:12.339 "bdev_zone_block_create", 00:07:12.339 "blobfs_create", 00:07:12.339 "blobfs_detect", 00:07:12.339 "blobfs_set_cache_size", 00:07:12.339 "bdev_aio_delete", 00:07:12.339 "bdev_aio_rescan", 00:07:12.339 "bdev_aio_create", 00:07:12.339 "bdev_ftl_set_property", 00:07:12.339 "bdev_ftl_get_properties", 00:07:12.339 "bdev_ftl_get_stats", 00:07:12.339 "bdev_ftl_unmap", 00:07:12.339 "bdev_ftl_unload", 00:07:12.339 "bdev_ftl_delete", 00:07:12.339 "bdev_ftl_load", 00:07:12.339 "bdev_ftl_create", 00:07:12.339 "bdev_virtio_attach_controller", 00:07:12.339 "bdev_virtio_scsi_get_devices", 00:07:12.339 "bdev_virtio_detach_controller", 00:07:12.339 "bdev_virtio_blk_set_hotplug", 00:07:12.339 "bdev_iscsi_delete", 00:07:12.339 "bdev_iscsi_create", 00:07:12.339 "bdev_iscsi_set_options", 00:07:12.339 "accel_error_inject_error", 00:07:12.339 "ioat_scan_accel_module", 00:07:12.339 "dsa_scan_accel_module", 
00:07:12.339 "iaa_scan_accel_module", 00:07:12.339 "vfu_virtio_create_fs_endpoint", 00:07:12.339 "vfu_virtio_create_scsi_endpoint", 00:07:12.339 "vfu_virtio_scsi_remove_target", 00:07:12.339 "vfu_virtio_scsi_add_target", 00:07:12.339 "vfu_virtio_create_blk_endpoint", 00:07:12.339 "vfu_virtio_delete_endpoint", 00:07:12.339 "keyring_file_remove_key", 00:07:12.339 "keyring_file_add_key", 00:07:12.339 "keyring_linux_set_options", 00:07:12.339 "fsdev_aio_delete", 00:07:12.339 "fsdev_aio_create", 00:07:12.339 "iscsi_get_histogram", 00:07:12.339 "iscsi_enable_histogram", 00:07:12.339 "iscsi_set_options", 00:07:12.339 "iscsi_get_auth_groups", 00:07:12.339 "iscsi_auth_group_remove_secret", 00:07:12.339 "iscsi_auth_group_add_secret", 00:07:12.339 "iscsi_delete_auth_group", 00:07:12.339 "iscsi_create_auth_group", 00:07:12.339 "iscsi_set_discovery_auth", 00:07:12.339 "iscsi_get_options", 00:07:12.339 "iscsi_target_node_request_logout", 00:07:12.339 "iscsi_target_node_set_redirect", 00:07:12.339 "iscsi_target_node_set_auth", 00:07:12.339 "iscsi_target_node_add_lun", 00:07:12.339 "iscsi_get_stats", 00:07:12.339 "iscsi_get_connections", 00:07:12.339 "iscsi_portal_group_set_auth", 00:07:12.339 "iscsi_start_portal_group", 00:07:12.339 "iscsi_delete_portal_group", 00:07:12.339 "iscsi_create_portal_group", 00:07:12.339 "iscsi_get_portal_groups", 00:07:12.339 "iscsi_delete_target_node", 00:07:12.339 "iscsi_target_node_remove_pg_ig_maps", 00:07:12.339 "iscsi_target_node_add_pg_ig_maps", 00:07:12.339 "iscsi_create_target_node", 00:07:12.339 "iscsi_get_target_nodes", 00:07:12.339 "iscsi_delete_initiator_group", 00:07:12.339 "iscsi_initiator_group_remove_initiators", 00:07:12.339 "iscsi_initiator_group_add_initiators", 00:07:12.339 "iscsi_create_initiator_group", 00:07:12.339 "iscsi_get_initiator_groups", 00:07:12.339 "nvmf_set_crdt", 00:07:12.339 "nvmf_set_config", 00:07:12.339 "nvmf_set_max_subsystems", 00:07:12.339 "nvmf_stop_mdns_prr", 00:07:12.339 "nvmf_publish_mdns_prr", 
00:07:12.339 "nvmf_subsystem_get_listeners", 00:07:12.339 "nvmf_subsystem_get_qpairs", 00:07:12.339 "nvmf_subsystem_get_controllers", 00:07:12.339 "nvmf_get_stats", 00:07:12.339 "nvmf_get_transports", 00:07:12.339 "nvmf_create_transport", 00:07:12.339 "nvmf_get_targets", 00:07:12.339 "nvmf_delete_target", 00:07:12.339 "nvmf_create_target", 00:07:12.339 "nvmf_subsystem_allow_any_host", 00:07:12.339 "nvmf_subsystem_set_keys", 00:07:12.339 "nvmf_subsystem_remove_host", 00:07:12.339 "nvmf_subsystem_add_host", 00:07:12.339 "nvmf_ns_remove_host", 00:07:12.339 "nvmf_ns_add_host", 00:07:12.339 "nvmf_subsystem_remove_ns", 00:07:12.339 "nvmf_subsystem_set_ns_ana_group", 00:07:12.339 "nvmf_subsystem_add_ns", 00:07:12.339 "nvmf_subsystem_listener_set_ana_state", 00:07:12.339 "nvmf_discovery_get_referrals", 00:07:12.339 "nvmf_discovery_remove_referral", 00:07:12.339 "nvmf_discovery_add_referral", 00:07:12.339 "nvmf_subsystem_remove_listener", 00:07:12.339 "nvmf_subsystem_add_listener", 00:07:12.339 "nvmf_delete_subsystem", 00:07:12.339 "nvmf_create_subsystem", 00:07:12.339 "nvmf_get_subsystems", 00:07:12.339 "env_dpdk_get_mem_stats", 00:07:12.339 "nbd_get_disks", 00:07:12.339 "nbd_stop_disk", 00:07:12.339 "nbd_start_disk", 00:07:12.339 "ublk_recover_disk", 00:07:12.339 "ublk_get_disks", 00:07:12.339 "ublk_stop_disk", 00:07:12.339 "ublk_start_disk", 00:07:12.339 "ublk_destroy_target", 00:07:12.339 "ublk_create_target", 00:07:12.339 "virtio_blk_create_transport", 00:07:12.339 "virtio_blk_get_transports", 00:07:12.339 "vhost_controller_set_coalescing", 00:07:12.339 "vhost_get_controllers", 00:07:12.339 "vhost_delete_controller", 00:07:12.339 "vhost_create_blk_controller", 00:07:12.339 "vhost_scsi_controller_remove_target", 00:07:12.339 "vhost_scsi_controller_add_target", 00:07:12.339 "vhost_start_scsi_controller", 00:07:12.339 "vhost_create_scsi_controller", 00:07:12.339 "thread_set_cpumask", 00:07:12.339 "scheduler_set_options", 00:07:12.339 "framework_get_governor", 00:07:12.339 
"framework_get_scheduler", 00:07:12.339 "framework_set_scheduler", 00:07:12.339 "framework_get_reactors", 00:07:12.339 "thread_get_io_channels", 00:07:12.339 "thread_get_pollers", 00:07:12.339 "thread_get_stats", 00:07:12.339 "framework_monitor_context_switch", 00:07:12.339 "spdk_kill_instance", 00:07:12.339 "log_enable_timestamps", 00:07:12.339 "log_get_flags", 00:07:12.339 "log_clear_flag", 00:07:12.339 "log_set_flag", 00:07:12.339 "log_get_level", 00:07:12.339 "log_set_level", 00:07:12.339 "log_get_print_level", 00:07:12.339 "log_set_print_level", 00:07:12.339 "framework_enable_cpumask_locks", 00:07:12.339 "framework_disable_cpumask_locks", 00:07:12.339 "framework_wait_init", 00:07:12.339 "framework_start_init", 00:07:12.339 "scsi_get_devices", 00:07:12.340 "bdev_get_histogram", 00:07:12.340 "bdev_enable_histogram", 00:07:12.340 "bdev_set_qos_limit", 00:07:12.340 "bdev_set_qd_sampling_period", 00:07:12.340 "bdev_get_bdevs", 00:07:12.340 "bdev_reset_iostat", 00:07:12.340 "bdev_get_iostat", 00:07:12.340 "bdev_examine", 00:07:12.340 "bdev_wait_for_examine", 00:07:12.340 "bdev_set_options", 00:07:12.340 "accel_get_stats", 00:07:12.340 "accel_set_options", 00:07:12.340 "accel_set_driver", 00:07:12.340 "accel_crypto_key_destroy", 00:07:12.340 "accel_crypto_keys_get", 00:07:12.340 "accel_crypto_key_create", 00:07:12.340 "accel_assign_opc", 00:07:12.340 "accel_get_module_info", 00:07:12.340 "accel_get_opc_assignments", 00:07:12.340 "vmd_rescan", 00:07:12.340 "vmd_remove_device", 00:07:12.340 "vmd_enable", 00:07:12.340 "sock_get_default_impl", 00:07:12.340 "sock_set_default_impl", 00:07:12.340 "sock_impl_set_options", 00:07:12.340 "sock_impl_get_options", 00:07:12.340 "iobuf_get_stats", 00:07:12.340 "iobuf_set_options", 00:07:12.340 "keyring_get_keys", 00:07:12.340 "vfu_tgt_set_base_path", 00:07:12.340 "framework_get_pci_devices", 00:07:12.340 "framework_get_config", 00:07:12.340 "framework_get_subsystems", 00:07:12.340 "fsdev_set_opts", 00:07:12.340 "fsdev_get_opts", 
00:07:12.340 "trace_get_info", 00:07:12.340 "trace_get_tpoint_group_mask", 00:07:12.340 "trace_disable_tpoint_group", 00:07:12.340 "trace_enable_tpoint_group", 00:07:12.340 "trace_clear_tpoint_mask", 00:07:12.340 "trace_set_tpoint_mask", 00:07:12.340 "notify_get_notifications", 00:07:12.340 "notify_get_types", 00:07:12.340 "spdk_get_version", 00:07:12.340 "rpc_get_methods" 00:07:12.340 ] 00:07:12.340 10:35:02 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:12.340 10:35:02 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:12.340 10:35:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:12.340 10:35:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:12.340 10:35:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3740104 00:07:12.340 10:35:02 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3740104 ']' 00:07:12.340 10:35:02 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3740104 00:07:12.340 10:35:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:12.340 10:35:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.340 10:35:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3740104 00:07:12.599 10:35:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.599 10:35:02 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.599 10:35:02 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3740104' 00:07:12.599 killing process with pid 3740104 00:07:12.599 10:35:02 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3740104 00:07:12.599 10:35:02 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3740104 00:07:12.858 00:07:12.858 real 0m1.135s 00:07:12.858 user 0m1.891s 00:07:12.858 sys 0m0.456s 00:07:12.858 10:35:02 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.858 10:35:02 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:12.858 ************************************ 00:07:12.858 END TEST spdkcli_tcp 00:07:12.858 ************************************ 00:07:12.858 10:35:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:12.858 10:35:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.858 10:35:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.858 10:35:02 -- common/autotest_common.sh@10 -- # set +x 00:07:12.858 ************************************ 00:07:12.858 START TEST dpdk_mem_utility 00:07:12.858 ************************************ 00:07:12.858 10:35:02 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:12.858 * Looking for test storage... 00:07:12.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:12.858 10:35:02 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:12.858 10:35:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:07:12.858 10:35:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:13.118 10:35:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.118 10:35:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:13.118 10:35:02 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.118 10:35:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:07:13.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.118 --rc genhtml_branch_coverage=1 00:07:13.118 --rc genhtml_function_coverage=1 00:07:13.118 --rc genhtml_legend=1 00:07:13.118 --rc geninfo_all_blocks=1 00:07:13.118 --rc geninfo_unexecuted_blocks=1 00:07:13.118 00:07:13.118 ' 00:07:13.118 10:35:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:13.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.118 --rc genhtml_branch_coverage=1 00:07:13.118 --rc genhtml_function_coverage=1 00:07:13.118 --rc genhtml_legend=1 00:07:13.118 --rc geninfo_all_blocks=1 00:07:13.118 --rc geninfo_unexecuted_blocks=1 00:07:13.118 00:07:13.118 ' 00:07:13.118 10:35:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:13.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.118 --rc genhtml_branch_coverage=1 00:07:13.118 --rc genhtml_function_coverage=1 00:07:13.118 --rc genhtml_legend=1 00:07:13.118 --rc geninfo_all_blocks=1 00:07:13.118 --rc geninfo_unexecuted_blocks=1 00:07:13.118 00:07:13.118 ' 00:07:13.118 10:35:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:13.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.118 --rc genhtml_branch_coverage=1 00:07:13.118 --rc genhtml_function_coverage=1 00:07:13.118 --rc genhtml_legend=1 00:07:13.118 --rc geninfo_all_blocks=1 00:07:13.118 --rc geninfo_unexecuted_blocks=1 00:07:13.118 00:07:13.118 ' 00:07:13.118 10:35:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:13.118 10:35:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3740414 00:07:13.118 10:35:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:13.118 10:35:02 
dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3740414 00:07:13.118 10:35:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3740414 ']' 00:07:13.118 10:35:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.118 10:35:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.118 10:35:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.118 10:35:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.118 10:35:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:13.118 [2024-11-19 10:35:02.745728] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:07:13.118 [2024-11-19 10:35:02.745773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3740414 ] 00:07:13.118 [2024-11-19 10:35:02.802413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.118 [2024-11-19 10:35:02.845737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.378 10:35:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.378 10:35:03 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:13.378 10:35:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:13.378 10:35:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:13.378 10:35:03 dpdk_mem_utility -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:13.378 10:35:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:13.378 { 00:07:13.378 "filename": "/tmp/spdk_mem_dump.txt" 00:07:13.378 } 00:07:13.378 10:35:03 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.378 10:35:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:13.378 DPDK memory size 810.000000 MiB in 1 heap(s) 00:07:13.378 1 heaps totaling size 810.000000 MiB 00:07:13.378 size: 810.000000 MiB heap id: 0 00:07:13.378 end heaps---------- 00:07:13.378 9 mempools totaling size 595.772034 MiB 00:07:13.378 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:13.378 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:13.378 size: 92.545471 MiB name: bdev_io_3740414 00:07:13.378 size: 50.003479 MiB name: msgpool_3740414 00:07:13.378 size: 36.509338 MiB name: fsdev_io_3740414 00:07:13.378 size: 21.763794 MiB name: PDU_Pool 00:07:13.378 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:13.378 size: 4.133484 MiB name: evtpool_3740414 00:07:13.378 size: 0.026123 MiB name: Session_Pool 00:07:13.378 end mempools------- 00:07:13.378 6 memzones totaling size 4.142822 MiB 00:07:13.378 size: 1.000366 MiB name: RG_ring_0_3740414 00:07:13.378 size: 1.000366 MiB name: RG_ring_1_3740414 00:07:13.378 size: 1.000366 MiB name: RG_ring_4_3740414 00:07:13.378 size: 1.000366 MiB name: RG_ring_5_3740414 00:07:13.378 size: 0.125366 MiB name: RG_ring_2_3740414 00:07:13.378 size: 0.015991 MiB name: RG_ring_3_3740414 00:07:13.378 end memzones------- 00:07:13.378 10:35:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:13.378 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:07:13.378 list of free elements. 
size: 10.862488 MiB 00:07:13.378 element at address: 0x200018a00000 with size: 0.999878 MiB 00:07:13.378 element at address: 0x200018c00000 with size: 0.999878 MiB 00:07:13.378 element at address: 0x200000400000 with size: 0.998535 MiB 00:07:13.378 element at address: 0x200031800000 with size: 0.994446 MiB 00:07:13.378 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:13.378 element at address: 0x200012c00000 with size: 0.954285 MiB 00:07:13.378 element at address: 0x200018e00000 with size: 0.936584 MiB 00:07:13.378 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:13.378 element at address: 0x20001a600000 with size: 0.582886 MiB 00:07:13.378 element at address: 0x200000c00000 with size: 0.495422 MiB 00:07:13.378 element at address: 0x20000a600000 with size: 0.490723 MiB 00:07:13.378 element at address: 0x200019000000 with size: 0.485657 MiB 00:07:13.378 element at address: 0x200003e00000 with size: 0.481934 MiB 00:07:13.378 element at address: 0x200027a00000 with size: 0.410034 MiB 00:07:13.378 element at address: 0x200000800000 with size: 0.355042 MiB 00:07:13.378 list of standard malloc elements. 
size: 199.218628 MiB 00:07:13.378 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:13.378 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:13.378 element at address: 0x200018afff80 with size: 1.000122 MiB 00:07:13.378 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:07:13.379 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:13.379 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:13.379 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:07:13.379 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:13.379 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:07:13.379 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:13.379 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:13.379 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:13.379 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:13.379 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:13.379 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:13.379 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:13.379 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:07:13.379 element at address: 0x20000085b040 with size: 0.000183 MiB 00:07:13.379 element at address: 0x20000085f300 with size: 0.000183 MiB 00:07:13.379 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:13.379 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:13.379 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:13.379 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:13.379 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:13.379 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:13.379 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:13.379 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:13.379 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:13.379 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:13.379 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:13.379 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:13.379 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:13.379 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:13.379 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:07:13.379 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:07:13.379 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:07:13.379 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:07:13.379 element at address: 0x20001a695380 with size: 0.000183 MiB 00:07:13.379 element at address: 0x20001a695440 with size: 0.000183 MiB 00:07:13.379 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:07:13.379 element at address: 0x200027a69040 with size: 0.000183 MiB 00:07:13.379 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:07:13.379 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:07:13.379 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:07:13.379 list of memzone associated elements. 
size: 599.918884 MiB 00:07:13.379 element at address: 0x20001a695500 with size: 211.416748 MiB 00:07:13.379 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:13.379 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:07:13.379 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:13.379 element at address: 0x200012df4780 with size: 92.045044 MiB 00:07:13.379 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3740414_0 00:07:13.379 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:13.379 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3740414_0 00:07:13.379 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:13.379 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3740414_0 00:07:13.379 element at address: 0x2000191be940 with size: 20.255554 MiB 00:07:13.379 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:13.379 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:07:13.379 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:13.379 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:13.379 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3740414_0 00:07:13.379 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:13.379 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3740414 00:07:13.379 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:13.379 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3740414 00:07:13.379 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:13.379 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:13.379 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:07:13.379 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:13.379 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:13.379 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:13.379 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:13.379 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:13.379 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:13.379 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3740414 00:07:13.379 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:13.379 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3740414 00:07:13.379 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:07:13.379 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3740414 00:07:13.379 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:07:13.379 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3740414 00:07:13.379 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:13.379 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3740414 00:07:13.379 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:13.379 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3740414 00:07:13.379 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:13.379 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:13.379 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:13.379 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:13.379 element at address: 0x20001907c540 with size: 0.250488 MiB 00:07:13.379 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:13.379 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:13.379 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3740414 00:07:13.379 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:07:13.379 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3740414 00:07:13.379 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:07:13.379 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:13.379 element at address: 0x200027a69100 with size: 0.023743 MiB 00:07:13.379 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:13.379 element at address: 0x20000085b100 with size: 0.016113 MiB 00:07:13.379 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3740414 00:07:13.379 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:07:13.379 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:13.379 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:13.379 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3740414 00:07:13.379 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:13.379 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3740414 00:07:13.379 element at address: 0x20000085af00 with size: 0.000305 MiB 00:07:13.379 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3740414 00:07:13.379 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:07:13.379 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:13.379 10:35:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:13.379 10:35:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3740414 00:07:13.379 10:35:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3740414 ']' 00:07:13.379 10:35:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3740414 00:07:13.379 10:35:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:13.639 10:35:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.639 10:35:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3740414 00:07:13.639 10:35:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.639 10:35:03 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.639 10:35:03 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3740414' 00:07:13.639 killing process with pid 3740414 00:07:13.639 10:35:03 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3740414 00:07:13.639 10:35:03 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3740414 00:07:13.898 00:07:13.898 real 0m0.987s 00:07:13.898 user 0m0.968s 00:07:13.898 sys 0m0.381s 00:07:13.898 10:35:03 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.898 10:35:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:13.898 ************************************ 00:07:13.898 END TEST dpdk_mem_utility 00:07:13.898 ************************************ 00:07:13.898 10:35:03 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:13.898 10:35:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.898 10:35:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.898 10:35:03 -- common/autotest_common.sh@10 -- # set +x 00:07:13.898 ************************************ 00:07:13.898 START TEST event 00:07:13.898 ************************************ 00:07:13.898 10:35:03 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:13.898 * Looking for test storage... 
00:07:13.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:13.898 10:35:03 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:13.898 10:35:03 event -- common/autotest_common.sh@1693 -- # lcov --version 00:07:13.898 10:35:03 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:14.157 10:35:03 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:14.157 10:35:03 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.157 10:35:03 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.157 10:35:03 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.157 10:35:03 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.157 10:35:03 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.157 10:35:03 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.157 10:35:03 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.157 10:35:03 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.157 10:35:03 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.157 10:35:03 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.157 10:35:03 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.157 10:35:03 event -- scripts/common.sh@344 -- # case "$op" in 00:07:14.157 10:35:03 event -- scripts/common.sh@345 -- # : 1 00:07:14.157 10:35:03 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.157 10:35:03 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.157 10:35:03 event -- scripts/common.sh@365 -- # decimal 1 00:07:14.157 10:35:03 event -- scripts/common.sh@353 -- # local d=1 00:07:14.157 10:35:03 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.157 10:35:03 event -- scripts/common.sh@355 -- # echo 1 00:07:14.157 10:35:03 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.157 10:35:03 event -- scripts/common.sh@366 -- # decimal 2 00:07:14.157 10:35:03 event -- scripts/common.sh@353 -- # local d=2 00:07:14.157 10:35:03 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.157 10:35:03 event -- scripts/common.sh@355 -- # echo 2 00:07:14.157 10:35:03 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.157 10:35:03 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.157 10:35:03 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.157 10:35:03 event -- scripts/common.sh@368 -- # return 0 00:07:14.157 10:35:03 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.157 10:35:03 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:14.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.157 --rc genhtml_branch_coverage=1 00:07:14.157 --rc genhtml_function_coverage=1 00:07:14.157 --rc genhtml_legend=1 00:07:14.157 --rc geninfo_all_blocks=1 00:07:14.157 --rc geninfo_unexecuted_blocks=1 00:07:14.157 00:07:14.157 ' 00:07:14.157 10:35:03 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:14.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.157 --rc genhtml_branch_coverage=1 00:07:14.157 --rc genhtml_function_coverage=1 00:07:14.157 --rc genhtml_legend=1 00:07:14.157 --rc geninfo_all_blocks=1 00:07:14.157 --rc geninfo_unexecuted_blocks=1 00:07:14.157 00:07:14.157 ' 00:07:14.157 10:35:03 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:14.157 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:14.157 --rc genhtml_branch_coverage=1 00:07:14.157 --rc genhtml_function_coverage=1 00:07:14.157 --rc genhtml_legend=1 00:07:14.157 --rc geninfo_all_blocks=1 00:07:14.157 --rc geninfo_unexecuted_blocks=1 00:07:14.157 00:07:14.157 ' 00:07:14.157 10:35:03 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:14.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.157 --rc genhtml_branch_coverage=1 00:07:14.157 --rc genhtml_function_coverage=1 00:07:14.157 --rc genhtml_legend=1 00:07:14.157 --rc geninfo_all_blocks=1 00:07:14.157 --rc geninfo_unexecuted_blocks=1 00:07:14.157 00:07:14.157 ' 00:07:14.157 10:35:03 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:14.157 10:35:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:14.157 10:35:03 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:14.157 10:35:03 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:14.157 10:35:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.157 10:35:03 event -- common/autotest_common.sh@10 -- # set +x 00:07:14.157 ************************************ 00:07:14.157 START TEST event_perf 00:07:14.158 ************************************ 00:07:14.158 10:35:03 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:14.158 Running I/O for 1 seconds...[2024-11-19 10:35:03.805933] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:07:14.158 [2024-11-19 10:35:03.805996] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3740704 ] 00:07:14.158 [2024-11-19 10:35:03.885572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.158 [2024-11-19 10:35:03.928734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.158 [2024-11-19 10:35:03.928844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.158 [2024-11-19 10:35:03.928951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.158 [2024-11-19 10:35:03.928952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.535 Running I/O for 1 seconds... 00:07:15.535 lcore 0: 205462 00:07:15.535 lcore 1: 205462 00:07:15.535 lcore 2: 205463 00:07:15.535 lcore 3: 205463 00:07:15.535 done. 
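The xtrace records above repeatedly step through the version comparison in scripts/common.sh (`lt 1.15 2`, splitting each version on `.`, comparing components numerically via `decimal`, and returning 0 when `ver1 < ver2`) to decide whether the installed lcov accepts the `--rc lcov_*` coverage flags. A minimal standalone sketch of that comparison, assuming simplified function names modeled on the trace (this is an illustration, not the SPDK script itself):

```shell
#!/usr/bin/env bash
# Sketch of the traced comparison: lt VER1 VER2 returns 0 iff VER1 < VER2.
# Function names mirror the xtrace above but are hypothetical simplifications.

cmp_versions() {
    local IFS=.
    local -a ver1 ver2
    read -ra ver1 <<< "$1"   # split "1.15" into (1 15), as in the trace
    local op=$2
    read -ra ver2 <<< "$3"
    # Iterate up to the longer version, padding missing components with 0,
    # matching the traced loop: (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [[ $op == ">" ]]; return; }
        (( a < b )) && { [[ $op == "<" ]]; return; }
    done
    [[ $op == "==" ]]        # all components equal
}

lt() { cmp_versions "$1" "<" "$2"; }

# Same decision point as the log: pick lcov option spelling by version.
lt 1.15 2 && echo "lcov predates 2.x: use --rc lcov_* option spelling"
```

The per-component numeric compare (rather than a plain string compare) is what makes `1.9 < 1.15` come out correctly, which is why the traced script bothers with the `decimal` validation of each component.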
00:07:15.535 00:07:15.535 real 0m1.185s 00:07:15.535 user 0m4.096s 00:07:15.535 sys 0m0.086s 00:07:15.535 10:35:04 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.535 10:35:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:15.535 ************************************ 00:07:15.535 END TEST event_perf 00:07:15.535 ************************************ 00:07:15.535 10:35:05 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:15.536 10:35:05 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:15.536 10:35:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.536 10:35:05 event -- common/autotest_common.sh@10 -- # set +x 00:07:15.536 ************************************ 00:07:15.536 START TEST event_reactor 00:07:15.536 ************************************ 00:07:15.536 10:35:05 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:15.536 [2024-11-19 10:35:05.060881] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:07:15.536 [2024-11-19 10:35:05.060948] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3740881 ] 00:07:15.536 [2024-11-19 10:35:05.138502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.536 [2024-11-19 10:35:05.178761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.473 test_start 00:07:16.473 oneshot 00:07:16.473 tick 100 00:07:16.473 tick 100 00:07:16.473 tick 250 00:07:16.473 tick 100 00:07:16.473 tick 100 00:07:16.473 tick 100 00:07:16.473 tick 250 00:07:16.473 tick 500 00:07:16.473 tick 100 00:07:16.473 tick 100 00:07:16.473 tick 250 00:07:16.473 tick 100 00:07:16.473 tick 100 00:07:16.473 test_end 00:07:16.473 00:07:16.473 real 0m1.175s 00:07:16.473 user 0m1.101s 00:07:16.473 sys 0m0.070s 00:07:16.473 10:35:06 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.473 10:35:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:16.473 ************************************ 00:07:16.473 END TEST event_reactor 00:07:16.473 ************************************ 00:07:16.473 10:35:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:16.473 10:35:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:16.473 10:35:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.473 10:35:06 event -- common/autotest_common.sh@10 -- # set +x 00:07:16.732 ************************************ 00:07:16.732 START TEST event_reactor_perf 00:07:16.732 ************************************ 00:07:16.732 10:35:06 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:07:16.732 [2024-11-19 10:35:06.304466] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:07:16.732 [2024-11-19 10:35:06.304535] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3741044 ] 00:07:16.732 [2024-11-19 10:35:06.382446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.732 [2024-11-19 10:35:06.422356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.668 test_start 00:07:17.668 test_end 00:07:17.668 Performance: 522860 events per second 00:07:17.928 00:07:17.928 real 0m1.175s 00:07:17.928 user 0m1.097s 00:07:17.928 sys 0m0.074s 00:07:17.928 10:35:07 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.928 10:35:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:17.928 ************************************ 00:07:17.928 END TEST event_reactor_perf 00:07:17.928 ************************************ 00:07:17.928 10:35:07 event -- event/event.sh@49 -- # uname -s 00:07:17.928 10:35:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:17.928 10:35:07 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:17.928 10:35:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.928 10:35:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.928 10:35:07 event -- common/autotest_common.sh@10 -- # set +x 00:07:17.928 ************************************ 00:07:17.928 START TEST event_scheduler 00:07:17.928 ************************************ 00:07:17.928 10:35:07 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:17.928 * Looking for test storage... 00:07:17.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:17.928 10:35:07 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:17.928 10:35:07 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:07:17.928 10:35:07 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:17.928 10:35:07 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.928 10:35:07 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:17.928 10:35:07 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.928 10:35:07 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:17.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.928 --rc genhtml_branch_coverage=1 00:07:17.928 --rc genhtml_function_coverage=1 00:07:17.928 --rc genhtml_legend=1 00:07:17.928 --rc geninfo_all_blocks=1 00:07:17.928 --rc geninfo_unexecuted_blocks=1 00:07:17.928 00:07:17.928 ' 00:07:17.929 10:35:07 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:17.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.929 --rc genhtml_branch_coverage=1 00:07:17.929 --rc genhtml_function_coverage=1 00:07:17.929 --rc 
genhtml_legend=1 00:07:17.929 --rc geninfo_all_blocks=1 00:07:17.929 --rc geninfo_unexecuted_blocks=1 00:07:17.929 00:07:17.929 ' 00:07:17.929 10:35:07 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:17.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.929 --rc genhtml_branch_coverage=1 00:07:17.929 --rc genhtml_function_coverage=1 00:07:17.929 --rc genhtml_legend=1 00:07:17.929 --rc geninfo_all_blocks=1 00:07:17.929 --rc geninfo_unexecuted_blocks=1 00:07:17.929 00:07:17.929 ' 00:07:17.929 10:35:07 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:17.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.929 --rc genhtml_branch_coverage=1 00:07:17.929 --rc genhtml_function_coverage=1 00:07:17.929 --rc genhtml_legend=1 00:07:17.929 --rc geninfo_all_blocks=1 00:07:17.929 --rc geninfo_unexecuted_blocks=1 00:07:17.929 00:07:17.929 ' 00:07:17.929 10:35:07 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:17.929 10:35:07 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3741361 00:07:17.929 10:35:07 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:17.929 10:35:07 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:17.929 10:35:07 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3741361 00:07:17.929 10:35:07 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3741361 ']' 00:07:17.929 10:35:07 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.929 10:35:07 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.929 10:35:07 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.929 10:35:07 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.929 10:35:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:18.188 [2024-11-19 10:35:07.754130] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:07:18.188 [2024-11-19 10:35:07.754181] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3741361 ] 00:07:18.188 [2024-11-19 10:35:07.830124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.188 [2024-11-19 10:35:07.873368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.188 [2024-11-19 10:35:07.873480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.188 [2024-11-19 10:35:07.873585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.188 [2024-11-19 10:35:07.873586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.188 10:35:07 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.188 10:35:07 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:18.188 10:35:07 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:18.188 10:35:07 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.188 10:35:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:18.188 [2024-11-19 10:35:07.926115] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:18.189 [2024-11-19 10:35:07.926132] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:18.189 [2024-11-19 10:35:07.926142] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:18.189 [2024-11-19 10:35:07.926148] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:18.189 [2024-11-19 10:35:07.926153] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:18.189 10:35:07 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.189 10:35:07 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:18.189 10:35:07 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.189 10:35:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:18.449 [2024-11-19 10:35:08.004039] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:18.449 10:35:08 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.449 10:35:08 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:18.449 10:35:08 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.449 10:35:08 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.449 10:35:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:18.449 ************************************ 00:07:18.449 START TEST scheduler_create_thread 00:07:18.449 ************************************ 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.449 2 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.449 3 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.449 4 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.449 5 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.449 10:35:08 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.449 6 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.449 7 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.449 8 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.449 10:35:08 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.449 9 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.449 10 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.449 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.017 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.017 10:35:08 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:19.017 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.017 10:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.525 10:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.525 10:35:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:20.525 10:35:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:20.525 10:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.525 10:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.461 10:35:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.461 00:07:21.461 real 0m3.099s 00:07:21.461 user 0m0.026s 00:07:21.461 sys 0m0.003s 00:07:21.461 10:35:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.461 10:35:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.461 ************************************ 00:07:21.461 END TEST scheduler_create_thread 00:07:21.461 ************************************ 00:07:21.461 10:35:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:21.461 10:35:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3741361 00:07:21.461 10:35:11 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3741361 ']' 00:07:21.461 10:35:11 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 3741361 00:07:21.461 10:35:11 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:21.461 10:35:11 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.461 10:35:11 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3741361 00:07:21.461 10:35:11 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:21.461 10:35:11 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:21.461 10:35:11 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3741361' 00:07:21.461 killing process with pid 3741361 00:07:21.461 10:35:11 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3741361 00:07:21.461 10:35:11 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3741361 00:07:22.028 [2024-11-19 10:35:11.519488] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:07:22.028 00:07:22.028 real 0m4.171s 00:07:22.028 user 0m6.712s 00:07:22.028 sys 0m0.353s 00:07:22.028 10:35:11 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.028 10:35:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:22.028 ************************************ 00:07:22.028 END TEST event_scheduler 00:07:22.028 ************************************ 00:07:22.028 10:35:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:22.028 10:35:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:22.028 10:35:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.028 10:35:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.028 10:35:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.028 ************************************ 00:07:22.028 START TEST app_repeat 00:07:22.028 ************************************ 00:07:22.028 10:35:11 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:22.028 10:35:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.028 10:35:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.028 10:35:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:22.028 10:35:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:22.028 10:35:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:22.028 10:35:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:22.028 10:35:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:22.028 10:35:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3742065 00:07:22.028 10:35:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:22.028 10:35:11 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:22.028 10:35:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3742065' 00:07:22.028 Process app_repeat pid: 3742065 00:07:22.028 10:35:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:22.028 10:35:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:22.028 spdk_app_start Round 0 00:07:22.028 10:35:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3742065 /var/tmp/spdk-nbd.sock 00:07:22.028 10:35:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3742065 ']' 00:07:22.028 10:35:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:22.028 10:35:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.028 10:35:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:22.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:22.028 10:35:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.028 10:35:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:22.286 [2024-11-19 10:35:11.819765] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:07:22.286 [2024-11-19 10:35:11.819816] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3742065 ] 00:07:22.286 [2024-11-19 10:35:11.894891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:22.286 [2024-11-19 10:35:11.938814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.286 [2024-11-19 10:35:11.938816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.286 10:35:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.286 10:35:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:22.286 10:35:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:22.545 Malloc0 00:07:22.545 10:35:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:22.804 Malloc1 00:07:22.804 10:35:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:22.804 10:35:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.804 10:35:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:22.804 10:35:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:22.804 10:35:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.804 10:35:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:22.804 10:35:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:22.804 
10:35:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.804 10:35:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:22.804 10:35:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:22.804 10:35:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.804 10:35:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:22.804 10:35:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:22.804 10:35:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:22.804 10:35:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.804 10:35:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:23.062 /dev/nbd0 00:07:23.062 10:35:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:23.062 10:35:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:23.062 10:35:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:23.062 10:35:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:23.062 10:35:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:23.062 10:35:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:23.062 10:35:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:23.062 10:35:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:23.062 10:35:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:23.062 10:35:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:23.062 10:35:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:23.062 1+0 records in 00:07:23.062 1+0 records out 00:07:23.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182253 s, 22.5 MB/s 00:07:23.062 10:35:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:23.062 10:35:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:23.062 10:35:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:23.062 10:35:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:23.062 10:35:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:23.062 10:35:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.062 10:35:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:23.062 10:35:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:23.321 /dev/nbd1 00:07:23.321 10:35:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:23.321 10:35:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:23.321 10:35:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:23.321 10:35:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:23.321 10:35:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:23.321 10:35:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:23.321 10:35:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:23.321 10:35:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:23.321 10:35:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:23.321 10:35:12 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:23.321 10:35:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:23.321 1+0 records in 00:07:23.321 1+0 records out 00:07:23.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204763 s, 20.0 MB/s 00:07:23.321 10:35:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:23.321 10:35:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:23.321 10:35:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:23.321 10:35:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:23.321 10:35:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:23.321 10:35:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.321 10:35:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:23.321 10:35:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.321 10:35:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.321 10:35:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:23.580 { 00:07:23.580 "nbd_device": "/dev/nbd0", 00:07:23.580 "bdev_name": "Malloc0" 00:07:23.580 }, 00:07:23.580 { 00:07:23.580 "nbd_device": "/dev/nbd1", 00:07:23.580 "bdev_name": "Malloc1" 00:07:23.580 } 00:07:23.580 ]' 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:23.580 { 00:07:23.580 "nbd_device": "/dev/nbd0", 00:07:23.580 "bdev_name": "Malloc0" 00:07:23.580 
}, 00:07:23.580 { 00:07:23.580 "nbd_device": "/dev/nbd1", 00:07:23.580 "bdev_name": "Malloc1" 00:07:23.580 } 00:07:23.580 ]' 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:23.580 /dev/nbd1' 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:23.580 /dev/nbd1' 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:23.580 256+0 records in 00:07:23.580 256+0 records out 00:07:23.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106957 s, 98.0 MB/s 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:23.580 256+0 records in 00:07:23.580 256+0 records out 00:07:23.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137254 s, 76.4 MB/s 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:23.580 256+0 records in 00:07:23.580 256+0 records out 00:07:23.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148135 s, 70.8 MB/s 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:23.580 10:35:13 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.580 10:35:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:23.839 10:35:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.839 10:35:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.839 10:35:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.839 10:35:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.839 10:35:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.839 10:35:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.839 10:35:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:23.839 10:35:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.839 10:35:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.839 10:35:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:24.098 10:35:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:24.098 10:35:13 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:24.098 10:35:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:24.098 10:35:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.098 10:35:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.098 10:35:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:24.098 10:35:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:24.098 10:35:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.098 10:35:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:24.098 10:35:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.098 10:35:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.098 10:35:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:24.098 10:35:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:24.098 10:35:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.357 10:35:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:24.357 10:35:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:24.357 10:35:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.357 10:35:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:24.358 10:35:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:24.358 10:35:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:24.358 10:35:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:24.358 10:35:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:24.358 10:35:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:24.358 10:35:13 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:24.358 10:35:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:24.616 [2024-11-19 10:35:14.256270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.616 [2024-11-19 10:35:14.293055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.616 [2024-11-19 10:35:14.293056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.616 [2024-11-19 10:35:14.333737] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:24.616 [2024-11-19 10:35:14.333776] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:27.902 10:35:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:27.902 10:35:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:27.902 spdk_app_start Round 1 00:07:27.902 10:35:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3742065 /var/tmp/spdk-nbd.sock 00:07:27.902 10:35:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3742065 ']' 00:07:27.902 10:35:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:27.902 10:35:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.902 10:35:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:27.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
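The `waitforlisten` step traced above repeatedly checks for the RPC socket at `/var/tmp/spdk-nbd.sock` before issuing commands against it. A minimal sketch of that bounded-retry polling pattern (the function name, retry count, and sleep interval here are illustrative, not SPDK's actual helper):

```shell
#!/usr/bin/env bash
# Illustrative sketch only: poll until a path (e.g. a Unix domain
# socket like /var/tmp/spdk-nbd.sock) exists, with bounded retries.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0   # path appeared; caller may proceed
        sleep 0.1
    done
    return 1                         # gave up after max_retries checks
}
```

SPDK's real `waitforlisten` additionally verifies the target process is alive and that the socket answers RPCs; this sketch only shows the bounded-retry shape visible in the `max_retries=100` trace lines.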
00:07:27.902 10:35:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.902 10:35:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:27.902 10:35:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.902 10:35:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:27.902 10:35:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:27.902 Malloc0 00:07:27.902 10:35:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:28.161 Malloc1 00:07:28.161 10:35:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:28.161 10:35:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.161 10:35:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:28.161 10:35:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:28.161 10:35:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:28.161 10:35:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:28.161 10:35:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:28.161 10:35:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.161 10:35:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:28.161 10:35:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:28.161 10:35:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:28.161 10:35:17 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:28.161 10:35:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:28.161 10:35:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:28.161 10:35:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:28.161 10:35:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:28.420 /dev/nbd0 00:07:28.420 10:35:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:28.420 10:35:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:28.420 10:35:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:28.420 10:35:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:28.420 10:35:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:28.420 10:35:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:28.420 10:35:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:28.420 10:35:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:28.420 10:35:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:28.420 10:35:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:28.420 10:35:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:28.420 1+0 records in 00:07:28.420 1+0 records out 00:07:28.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228409 s, 17.9 MB/s 00:07:28.420 10:35:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:28.420 10:35:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:28.420 10:35:18 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:28.420 10:35:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:28.420 10:35:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:28.420 10:35:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.420 10:35:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:28.420 10:35:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:28.420 /dev/nbd1 00:07:28.678 10:35:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:28.678 10:35:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:28.678 10:35:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:28.678 10:35:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:28.678 10:35:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:28.678 10:35:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:28.678 10:35:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:28.678 10:35:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:28.678 10:35:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:28.678 10:35:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:28.678 10:35:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:28.678 1+0 records in 00:07:28.678 1+0 records out 00:07:28.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236763 s, 17.3 MB/s 00:07:28.678 10:35:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:28.678 10:35:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:28.678 10:35:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:28.678 10:35:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:28.678 10:35:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:28.678 10:35:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.679 10:35:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:28.679 10:35:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:28.679 10:35:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.679 10:35:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:28.679 10:35:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:28.679 { 00:07:28.679 "nbd_device": "/dev/nbd0", 00:07:28.679 "bdev_name": "Malloc0" 00:07:28.679 }, 00:07:28.679 { 00:07:28.679 "nbd_device": "/dev/nbd1", 00:07:28.679 "bdev_name": "Malloc1" 00:07:28.679 } 00:07:28.679 ]' 00:07:28.679 10:35:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:28.679 { 00:07:28.679 "nbd_device": "/dev/nbd0", 00:07:28.679 "bdev_name": "Malloc0" 00:07:28.679 }, 00:07:28.679 { 00:07:28.679 "nbd_device": "/dev/nbd1", 00:07:28.679 "bdev_name": "Malloc1" 00:07:28.679 } 00:07:28.679 ]' 00:07:28.679 10:35:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:28.937 /dev/nbd1' 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:28.937 /dev/nbd1' 00:07:28.937 
10:35:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:28.937 256+0 records in 00:07:28.937 256+0 records out 00:07:28.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106609 s, 98.4 MB/s 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:28.937 256+0 records in 00:07:28.937 256+0 records out 00:07:28.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014165 s, 74.0 MB/s 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:28.937 256+0 records in 00:07:28.937 256+0 records out 00:07:28.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148909 s, 70.4 MB/s 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.937 10:35:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:29.195 10:35:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:29.195 10:35:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:29.195 10:35:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:29.195 10:35:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.195 10:35:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.195 10:35:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:29.195 10:35:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:29.195 10:35:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.195 10:35:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:29.195 10:35:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:29.195 10:35:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:29.454 10:35:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:29.454 10:35:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:29.454 10:35:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.454 10:35:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.454 10:35:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:29.454 10:35:18 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:29.454 10:35:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.454 10:35:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:29.454 10:35:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.454 10:35:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:29.454 10:35:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:29.454 10:35:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:29.454 10:35:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:29.454 10:35:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:29.454 10:35:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:29.454 10:35:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:29.454 10:35:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:29.454 10:35:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:29.454 10:35:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:29.454 10:35:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:29.454 10:35:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:29.454 10:35:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:29.454 10:35:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:29.712 10:35:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:29.970 [2024-11-19 10:35:19.584034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:29.970 [2024-11-19 10:35:19.620895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.970 [2024-11-19 10:35:19.620896] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.970 [2024-11-19 10:35:19.662693] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:29.970 [2024-11-19 10:35:19.662732] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:33.257 10:35:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:33.257 10:35:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:33.257 spdk_app_start Round 2 00:07:33.257 10:35:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3742065 /var/tmp/spdk-nbd.sock 00:07:33.258 10:35:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3742065 ']' 00:07:33.258 10:35:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:33.258 10:35:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.258 10:35:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:33.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
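Each `waitfornbd`/`waitfornbd_exit` loop in the trace above polls `/proc/partitions` up to 20 times, using `grep -q -w`, for the nbd name to appear (or disappear) before continuing. A hedged sketch of that loop; the partitions file is a parameter here so the example runs without real nbd devices:

```shell
#!/usr/bin/env bash
# Illustrative sketch: wait for a whole-word device name to show up in a
# partitions listing (normally /proc/partitions), checking up to 20 times.
wait_for_nbd() {
    local nbd_name=$1 partitions=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" "$partitions"; then
            return 0   # device listed; the test then dd-reads one block
        fi
        sleep 0.1
    done
    return 1
}
```

The `-w` flag matters: it prevents `nbd1` from matching `nbd10`, which is why the traced loops match whole words against `/proc/partitions`.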
00:07:33.258 10:35:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.258 10:35:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:33.258 10:35:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.258 10:35:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:33.258 10:35:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:33.258 Malloc0 00:07:33.258 10:35:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:33.258 Malloc1 00:07:33.516 10:35:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:33.516 10:35:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.516 10:35:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:33.516 10:35:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:33.516 10:35:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.516 10:35:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:33.517 10:35:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:33.517 10:35:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.517 10:35:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:33.517 10:35:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:33.517 10:35:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.517 10:35:23 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:33.517 10:35:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:33.517 10:35:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:33.517 10:35:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:33.517 10:35:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:33.517 /dev/nbd0 00:07:33.517 10:35:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:33.517 10:35:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:33.517 10:35:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:33.517 10:35:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:33.517 10:35:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:33.517 10:35:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:33.517 10:35:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:33.517 10:35:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:33.517 10:35:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:33.517 10:35:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:33.517 10:35:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:33.517 1+0 records in 00:07:33.517 1+0 records out 00:07:33.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190514 s, 21.5 MB/s 00:07:33.517 10:35:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:33.517 10:35:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:33.517 10:35:23 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:33.517 10:35:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:33.517 10:35:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:33.776 10:35:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.776 10:35:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:33.776 10:35:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:33.776 /dev/nbd1 00:07:33.776 10:35:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:33.776 10:35:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:33.776 10:35:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:33.776 10:35:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:33.776 10:35:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:33.776 10:35:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:33.776 10:35:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:33.776 10:35:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:33.776 10:35:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:33.776 10:35:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:33.776 10:35:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:33.776 1+0 records in 00:07:33.776 1+0 records out 00:07:33.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247983 s, 16.5 MB/s 00:07:33.776 10:35:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:33.776 10:35:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:33.776 10:35:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:33.776 10:35:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:33.776 10:35:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:33.776 10:35:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.776 10:35:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:33.776 10:35:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:33.776 10:35:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.776 10:35:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:34.035 { 00:07:34.035 "nbd_device": "/dev/nbd0", 00:07:34.035 "bdev_name": "Malloc0" 00:07:34.035 }, 00:07:34.035 { 00:07:34.035 "nbd_device": "/dev/nbd1", 00:07:34.035 "bdev_name": "Malloc1" 00:07:34.035 } 00:07:34.035 ]' 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:34.035 { 00:07:34.035 "nbd_device": "/dev/nbd0", 00:07:34.035 "bdev_name": "Malloc0" 00:07:34.035 }, 00:07:34.035 { 00:07:34.035 "nbd_device": "/dev/nbd1", 00:07:34.035 "bdev_name": "Malloc1" 00:07:34.035 } 00:07:34.035 ]' 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:34.035 /dev/nbd1' 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:34.035 /dev/nbd1' 00:07:34.035 
10:35:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:34.035 256+0 records in 00:07:34.035 256+0 records out 00:07:34.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106991 s, 98.0 MB/s 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.035 10:35:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:34.035 256+0 records in 00:07:34.035 256+0 records out 00:07:34.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141474 s, 74.1 MB/s 00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:34.294 256+0 records in
00:07:34.294 256+0 records out
00:07:34.294 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145015 s, 72.3 MB/s
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:34.294 10:35:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:34.294 10:35:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:34.294 10:35:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:34.294 10:35:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:34.294 10:35:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:34.294 10:35:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:34.294 10:35:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:34.294 10:35:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:34.294 10:35:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:34.294 10:35:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:34.294 10:35:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:34.553 10:35:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:34.553 10:35:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:34.553 10:35:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:34.553 10:35:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:34.553 10:35:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:34.553 10:35:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:34.553 10:35:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:34.553 10:35:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:34.553 10:35:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:34.553 10:35:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:34.553 10:35:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:34.812 10:35:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:34.812 10:35:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:34.812 10:35:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:34.812 10:35:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:34.812 10:35:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:34.812 10:35:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:34.812 10:35:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:34.812 10:35:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:34.812 10:35:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:34.812 10:35:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:34.812 10:35:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:34.812 10:35:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:34.813 10:35:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:35.072 10:35:24 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:35.331 [2024-11-19 10:35:24.899113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:35.331 [2024-11-19 10:35:24.937075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:35.331 [2024-11-19 10:35:24.937076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:35.331 [2024-11-19 10:35:24.977779] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:35.331 [2024-11-19 10:35:24.977818] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:38.620 10:35:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3742065 /var/tmp/spdk-nbd.sock
00:07:38.620 10:35:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3742065 ']'
00:07:38.620 10:35:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:38.620 10:35:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:38.620 10:35:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:38.620 10:35:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:38.620 10:35:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:38.620 10:35:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:38.620 10:35:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:38.620 10:35:27 event.app_repeat -- event/event.sh@39 -- # killprocess 3742065
00:07:38.620 10:35:27 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3742065 ']'
00:07:38.620 10:35:27 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3742065
00:07:38.620 10:35:27 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:07:38.620 10:35:27 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:38.620 10:35:27 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3742065
00:07:38.620 10:35:28 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:38.620 10:35:28 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:38.620 10:35:28 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3742065'
killing process with pid 3742065
10:35:28 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3742065
00:07:38.620 10:35:28 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3742065
00:07:38.620 spdk_app_start is called in Round 0.
00:07:38.620 Shutdown signal received, stop current app iteration
00:07:38.620 Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 reinitialization...
00:07:38.620 spdk_app_start is called in Round 1.
00:07:38.620 Shutdown signal received, stop current app iteration
00:07:38.620 Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 reinitialization...
00:07:38.620 spdk_app_start is called in Round 2.
00:07:38.620 Shutdown signal received, stop current app iteration
00:07:38.620 Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 reinitialization...
00:07:38.620 spdk_app_start is called in Round 3.
00:07:38.620 Shutdown signal received, stop current app iteration
00:07:38.620 10:35:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:07:38.620 10:35:28 event.app_repeat -- event/event.sh@42 -- # return 0
00:07:38.620
00:07:38.620 real 0m16.360s
00:07:38.620 user 0m35.939s
00:07:38.620 sys 0m2.547s
00:07:38.620 10:35:28 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:38.620 10:35:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:38.620 ************************************
00:07:38.620 END TEST app_repeat
00:07:38.620 ************************************
00:07:38.620 10:35:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:07:38.620 10:35:28 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:07:38.620 10:35:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:38.620 10:35:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:38.620 10:35:28 event -- common/autotest_common.sh@10 -- # set +x
00:07:38.620 ************************************
00:07:38.620 START TEST cpu_locks
00:07:38.620 ************************************
00:07:38.620 10:35:28 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:07:38.620 * Looking for test storage...
00:07:38.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:07:38.620 10:35:28 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:38.620 10:35:28 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:07:38.620 10:35:28 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:38.620 10:35:28 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:38.620 10:35:28 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:07:38.620 10:35:28 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:38.620 10:35:28 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:38.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:38.620 --rc genhtml_branch_coverage=1
00:07:38.620 --rc genhtml_function_coverage=1
00:07:38.620 --rc genhtml_legend=1
00:07:38.620 --rc geninfo_all_blocks=1
00:07:38.620 --rc geninfo_unexecuted_blocks=1
00:07:38.620
00:07:38.620 '
00:07:38.620 10:35:28 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:38.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:38.620 --rc genhtml_branch_coverage=1
00:07:38.620 --rc genhtml_function_coverage=1
00:07:38.620 --rc genhtml_legend=1
00:07:38.620 --rc geninfo_all_blocks=1
00:07:38.620 --rc geninfo_unexecuted_blocks=1
00:07:38.620
00:07:38.620 '
00:07:38.620 10:35:28 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:07:38.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:38.620 --rc genhtml_branch_coverage=1
00:07:38.620 --rc genhtml_function_coverage=1
00:07:38.620 --rc genhtml_legend=1
00:07:38.620 --rc geninfo_all_blocks=1
00:07:38.620 --rc geninfo_unexecuted_blocks=1
00:07:38.620
00:07:38.620 '
00:07:38.620 10:35:28 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:07:38.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:38.620 --rc genhtml_branch_coverage=1
00:07:38.620 --rc genhtml_function_coverage=1
00:07:38.620 --rc genhtml_legend=1
00:07:38.620 --rc geninfo_all_blocks=1
00:07:38.620 --rc geninfo_unexecuted_blocks=1
00:07:38.620
00:07:38.620 '
00:07:38.620 10:35:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:07:38.620 10:35:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:07:38.620 10:35:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:07:38.620 10:35:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:07:38.620 10:35:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:38.620 10:35:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:38.620 10:35:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:38.880 ************************************
00:07:38.880 START TEST default_locks
00:07:38.880 ************************************
00:07:38.880 10:35:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:07:38.880 10:35:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3745161
00:07:38.880 10:35:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3745161
00:07:38.880 10:35:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:38.880 10:35:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3745161 ']'
00:07:38.880 10:35:28 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:38.880 10:35:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:38.880 10:35:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:38.880 10:35:28 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:38.880 10:35:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:38.880 [2024-11-19 10:35:28.481719] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:07:38.880 [2024-11-19 10:35:28.481763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745161 ]
00:07:38.880 [2024-11-19 10:35:28.555989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:38.880 [2024-11-19 10:35:28.595477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:39.139 10:35:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:39.139 10:35:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:07:39.139 10:35:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3745161
00:07:39.139 10:35:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:39.139 10:35:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3745161
00:07:39.706 lslocks: write error
00:07:39.706 10:35:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3745161
00:07:39.706 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3745161 ']'
00:07:39.706 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3745161
00:07:39.706 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:07:39.706 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:39.706 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3745161
00:07:39.706 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:39.706 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:39.706 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3745161'
killing process with pid 3745161
10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3745161
10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3745161
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3745161
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3745161
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3745161
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3745161 ']'
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:39.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3745161) - No such process
00:07:39.965 ERROR: process (pid: 3745161) is no longer running
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:07:39.965 10:35:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:39.966
00:07:39.966 real 0m1.226s
00:07:39.966 user 0m1.208s
00:07:39.966 sys 0m0.554s
00:07:39.966 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:39.966 10:35:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:39.966 ************************************
00:07:39.966 END TEST default_locks
00:07:39.966 ************************************
00:07:39.966 10:35:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:07:39.966 10:35:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:39.966 10:35:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:39.966 10:35:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:39.966 ************************************
00:07:39.966 START TEST default_locks_via_rpc
00:07:39.966 ************************************
00:07:39.966 10:35:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:07:39.966 10:35:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3745371
00:07:39.966 10:35:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3745371
00:07:39.966 10:35:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:39.966 10:35:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3745371 ']'
00:07:39.966 10:35:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:39.966 10:35:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:39.966 10:35:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:39.966 10:35:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:39.966 10:35:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:40.226 [2024-11-19 10:35:29.780009] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:07:40.226 [2024-11-19 10:35:29.780055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745371 ]
00:07:40.226 [2024-11-19 10:35:29.856452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:40.226 [2024-11-19 10:35:29.898210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.485 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:40.485 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:40.485 10:35:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:07:40.485 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.485 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:40.485 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.485 10:35:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:07:40.485 10:35:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:40.485 10:35:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:07:40.485 10:35:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:40.485 10:35:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:07:40.485 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.485 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:40.485 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.485 10:35:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3745371
00:07:40.485 10:35:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3745371
00:07:40.485 10:35:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:40.744 10:35:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3745371
00:07:40.745 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3745371 ']'
00:07:40.745 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3745371
00:07:40.745 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:07:40.745 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:40.745 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3745371
00:07:40.745 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:40.745 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:40.745 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3745371'
killing process with pid 3745371
10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3745371
10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3745371
00:07:41.004
00:07:41.004 real 0m0.953s
00:07:41.004 user 0m0.910s
00:07:41.004 sys 0m0.432s
00:07:41.004 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:41.004 10:35:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:41.004 ************************************
00:07:41.004 END TEST default_locks_via_rpc
00:07:41.004 ************************************
00:07:41.004 10:35:30 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:07:41.004 10:35:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:41.004 10:35:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:41.004 10:35:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:41.004 ************************************
00:07:41.004 START TEST non_locking_app_on_locked_coremask
00:07:41.004 ************************************
00:07:41.004 10:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:07:41.004 10:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3745540
00:07:41.004 10:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3745540 /var/tmp/spdk.sock
00:07:41.004 10:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:41.004 10:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3745540 ']'
00:07:41.005 10:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:41.005 10:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:41.005 10:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:41.005 10:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:41.005 10:35:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:41.264 [2024-11-19 10:35:30.800302] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:07:41.264 [2024-11-19 10:35:30.800343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745540 ]
00:07:41.264 [2024-11-19 10:35:30.874503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:41.264 [2024-11-19 10:35:30.916335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:41.523 10:35:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:41.523 10:35:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:41.523 10:35:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3745648
00:07:41.523 10:35:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3745648 /var/tmp/spdk2.sock
00:07:41.523 10:35:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:07:41.523 10:35:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3745648 ']'
00:07:41.523 10:35:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:41.523 10:35:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:41.523 10:35:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:41.523 10:35:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:41.523 10:35:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:41.523 [2024-11-19 10:35:31.193838] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:07:41.523 [2024-11-19 10:35:31.193886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745648 ]
00:07:41.523 [2024-11-19 10:35:31.281761] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:41.523 [2024-11-19 10:35:31.281789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:41.782 [2024-11-19 10:35:31.374729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:42.350 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:42.350 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:42.350 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3745540
00:07:42.350 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3745540
00:07:42.350 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:42.608 lslocks: write error
00:07:42.608 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3745540
00:07:42.608 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3745540 ']'
00:07:42.608 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3745540
00:07:42.608 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:42.608 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:42.608 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3745540
00:07:42.608 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:42.608 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:42.608 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3745540'
killing process with pid 3745540
10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3745540
10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3745540
00:07:43.547 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3745648
00:07:43.547 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3745648 ']'
00:07:43.547 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3745648
00:07:43.547 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:43.547 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:43.547 10:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3745648
00:07:43.547 10:35:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:43.547 10:35:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:43.547 10:35:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3745648'
killing process with pid 3745648
10:35:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3745648
10:35:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3745648
00:07:43.547
00:07:43.547 real 0m2.570s
00:07:43.547 user 0m2.705s
00:07:43.547 sys 0m0.815s
00:07:43.547 10:35:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:43.547 10:35:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:43.547 ************************************
00:07:43.547 END TEST non_locking_app_on_locked_coremask
00:07:43.547 ************************************
00:07:43.807 10:35:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:07:43.807 10:35:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:43.807 10:35:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:43.807 10:35:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:43.807 ************************************
00:07:43.807 START TEST locking_app_on_unlocked_coremask
00:07:43.807 ************************************
00:07:43.807 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:07:43.807 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3746036
00:07:43.807 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3746036 /var/tmp/spdk.sock
00:07:43.807 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:07:43.807 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3746036 ']'
00:07:43.807 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:43.807 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:43.807 10:35:33
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.807 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.807 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.807 [2024-11-19 10:35:33.439126] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:07:43.807 [2024-11-19 10:35:33.439171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746036 ] 00:07:43.807 [2024-11-19 10:35:33.512094] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:43.807 [2024-11-19 10:35:33.512119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.807 [2024-11-19 10:35:33.553962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.066 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.066 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:44.066 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3746058 00:07:44.066 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3746058 /var/tmp/spdk2.sock 00:07:44.066 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:44.066 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3746058 ']' 00:07:44.066 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:44.066 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.066 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:44.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:44.066 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.066 10:35:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.066 [2024-11-19 10:35:33.814344] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:07:44.066 [2024-11-19 10:35:33.814390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746058 ] 00:07:44.325 [2024-11-19 10:35:33.902783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.325 [2024-11-19 10:35:33.990878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.893 10:35:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.893 10:35:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:44.893 10:35:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3746058 00:07:44.893 10:35:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3746058 00:07:44.893 10:35:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:45.831 lslocks: write error 00:07:45.831 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3746036 00:07:45.831 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3746036 ']' 00:07:45.831 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3746036 00:07:45.831 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:45.831 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.831 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3746036 00:07:45.831 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.831 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.831 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3746036' 00:07:45.831 killing process with pid 3746036 00:07:45.831 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3746036 00:07:45.831 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3746036 00:07:46.399 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3746058 00:07:46.399 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3746058 ']' 00:07:46.399 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3746058 00:07:46.399 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:46.399 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.399 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3746058 00:07:46.399 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.399 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.399 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3746058' 00:07:46.399 killing process with pid 3746058 00:07:46.399 10:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3746058 00:07:46.399 10:35:35 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3746058 00:07:46.658 00:07:46.659 real 0m2.899s 00:07:46.659 user 0m3.040s 00:07:46.659 sys 0m0.983s 00:07:46.659 10:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.659 10:35:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.659 ************************************ 00:07:46.659 END TEST locking_app_on_unlocked_coremask 00:07:46.659 ************************************ 00:07:46.659 10:35:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:46.659 10:35:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.659 10:35:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.659 10:35:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.659 ************************************ 00:07:46.659 START TEST locking_app_on_locked_coremask 00:07:46.659 ************************************ 00:07:46.659 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:46.659 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3746533 00:07:46.659 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3746533 /var/tmp/spdk.sock 00:07:46.659 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:46.659 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3746533 ']' 00:07:46.659 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:07:46.659 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.659 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.659 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.659 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.659 [2024-11-19 10:35:36.405128] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:07:46.659 [2024-11-19 10:35:36.405179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746533 ] 00:07:46.917 [2024-11-19 10:35:36.459608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.917 [2024-11-19 10:35:36.500775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.176 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.176 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:47.176 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3746670 00:07:47.176 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:47.176 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3746670 /var/tmp/spdk2.sock 
00:07:47.176 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:47.176 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3746670 /var/tmp/spdk2.sock 00:07:47.176 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:47.176 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:47.176 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:47.176 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:47.176 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3746670 /var/tmp/spdk2.sock 00:07:47.176 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3746670 ']' 00:07:47.176 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:47.176 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.177 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:47.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:47.177 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.177 10:35:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.177 [2024-11-19 10:35:36.767696] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:07:47.177 [2024-11-19 10:35:36.767749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746670 ] 00:07:47.177 [2024-11-19 10:35:36.862583] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3746533 has claimed it. 00:07:47.177 [2024-11-19 10:35:36.862624] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:47.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3746670) - No such process 00:07:47.744 ERROR: process (pid: 3746670) is no longer running 00:07:47.744 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.744 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:47.744 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:47.744 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:47.744 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:47.744 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:47.744 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3746533 00:07:47.744 10:35:37 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3746533 00:07:47.744 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:48.003 lslocks: write error 00:07:48.003 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3746533 00:07:48.003 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3746533 ']' 00:07:48.003 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3746533 00:07:48.003 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:48.003 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.003 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3746533 00:07:48.003 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.003 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.003 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3746533' 00:07:48.003 killing process with pid 3746533 00:07:48.003 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3746533 00:07:48.003 10:35:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3746533 00:07:48.262 00:07:48.262 real 0m1.657s 00:07:48.262 user 0m1.803s 00:07:48.262 sys 0m0.532s 00:07:48.262 10:35:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.262 10:35:38 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:48.262 ************************************ 00:07:48.262 END TEST locking_app_on_locked_coremask 00:07:48.262 ************************************ 00:07:48.262 10:35:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:48.262 10:35:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.262 10:35:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.262 10:35:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.522 ************************************ 00:07:48.522 START TEST locking_overlapped_coremask 00:07:48.522 ************************************ 00:07:48.522 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:48.522 10:35:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3746920 00:07:48.522 10:35:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3746920 /var/tmp/spdk.sock 00:07:48.522 10:35:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:48.522 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3746920 ']' 00:07:48.522 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.522 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.522 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:48.522 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.522 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:48.522 [2024-11-19 10:35:38.126469] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:07:48.522 [2024-11-19 10:35:38.126521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746920 ] 00:07:48.522 [2024-11-19 10:35:38.204109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.522 [2024-11-19 10:35:38.250982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.522 [2024-11-19 10:35:38.251012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.522 [2024-11-19 10:35:38.251013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3747036 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3747036 /var/tmp/spdk2.sock 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 3747036 /var/tmp/spdk2.sock 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3747036 /var/tmp/spdk2.sock 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3747036 ']' 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:49.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.459 10:35:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.459 [2024-11-19 10:35:39.002742] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:07:49.459 [2024-11-19 10:35:39.002790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747036 ] 00:07:49.459 [2024-11-19 10:35:39.094277] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3746920 has claimed it. 00:07:49.459 [2024-11-19 10:35:39.094317] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:50.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3747036) - No such process 00:07:50.027 ERROR: process (pid: 3747036) is no longer running 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3746920 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3746920 ']' 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3746920 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3746920 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3746920' 00:07:50.027 killing process with pid 3746920 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3746920 00:07:50.027 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3746920 00:07:50.287 00:07:50.287 real 0m1.916s 00:07:50.287 user 0m5.522s 00:07:50.287 sys 0m0.420s 00:07:50.287 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.287 10:35:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.287 
************************************ 00:07:50.287 END TEST locking_overlapped_coremask 00:07:50.287 ************************************ 00:07:50.287 10:35:40 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:50.287 10:35:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.287 10:35:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.287 10:35:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.287 ************************************ 00:07:50.287 START TEST locking_overlapped_coremask_via_rpc 00:07:50.287 ************************************ 00:07:50.287 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:50.287 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:50.287 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3747292 00:07:50.287 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3747292 /var/tmp/spdk.sock 00:07:50.287 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3747292 ']' 00:07:50.287 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.287 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.287 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:50.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.287 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.287 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.546 [2024-11-19 10:35:40.103226] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:07:50.546 [2024-11-19 10:35:40.103265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747292 ] 00:07:50.546 [2024-11-19 10:35:40.177363] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:50.546 [2024-11-19 10:35:40.177395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.546 [2024-11-19 10:35:40.220641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.546 [2024-11-19 10:35:40.220750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.546 [2024-11-19 10:35:40.220751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.808 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.808 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:50.808 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3747300 00:07:50.808 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3747300 /var/tmp/spdk2.sock 00:07:50.808 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:07:50.808 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3747300 ']' 00:07:50.808 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:50.808 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.808 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:50.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:50.808 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.808 10:35:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.808 [2024-11-19 10:35:40.493781] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:07:50.808 [2024-11-19 10:35:40.493830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747300 ] 00:07:50.808 [2024-11-19 10:35:40.584366] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:50.808 [2024-11-19 10:35:40.584399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.067 [2024-11-19 10:35:40.671957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.067 [2024-11-19 10:35:40.672071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.067 [2024-11-19 10:35:40.672072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:51.634 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.635 10:35:41 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.635 [2024-11-19 10:35:41.340276] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3747292 has claimed it. 00:07:51.635 request: 00:07:51.635 { 00:07:51.635 "method": "framework_enable_cpumask_locks", 00:07:51.635 "req_id": 1 00:07:51.635 } 00:07:51.635 Got JSON-RPC error response 00:07:51.635 response: 00:07:51.635 { 00:07:51.635 "code": -32603, 00:07:51.635 "message": "Failed to claim CPU core: 2" 00:07:51.635 } 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3747292 /var/tmp/spdk.sock 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 3747292 ']' 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.635 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.894 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.894 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:51.894 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3747300 /var/tmp/spdk2.sock 00:07:51.894 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3747300 ']' 00:07:51.894 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:51.894 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.894 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:51.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:51.894 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.894 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.153 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.153 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:52.153 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:52.153 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:52.153 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:52.153 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:52.153 00:07:52.153 real 0m1.692s 00:07:52.153 user 0m0.826s 00:07:52.153 sys 0m0.129s 00:07:52.153 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.153 10:35:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.153 ************************************ 00:07:52.153 END TEST locking_overlapped_coremask_via_rpc 00:07:52.153 ************************************ 00:07:52.153 10:35:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:52.153 10:35:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3747292 ]] 00:07:52.153 10:35:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3747292 00:07:52.153 10:35:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3747292 ']' 00:07:52.153 10:35:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3747292 00:07:52.154 10:35:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:52.154 10:35:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.154 10:35:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3747292 00:07:52.154 10:35:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.154 10:35:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.154 10:35:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3747292' 00:07:52.154 killing process with pid 3747292 00:07:52.154 10:35:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3747292 00:07:52.154 10:35:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3747292 00:07:52.413 10:35:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3747300 ]] 00:07:52.413 10:35:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3747300 00:07:52.413 10:35:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3747300 ']' 00:07:52.413 10:35:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3747300 00:07:52.413 10:35:42 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:52.413 10:35:42 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.413 10:35:42 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3747300 00:07:52.672 10:35:42 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:52.672 10:35:42 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:52.672 10:35:42 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3747300' 00:07:52.672 killing process with pid 3747300 00:07:52.672 10:35:42 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3747300 00:07:52.672 10:35:42 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3747300 00:07:52.932 10:35:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:52.932 10:35:42 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:52.932 10:35:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3747292 ]] 00:07:52.932 10:35:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3747292 00:07:52.932 10:35:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3747292 ']' 00:07:52.932 10:35:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3747292 00:07:52.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3747292) - No such process 00:07:52.932 10:35:42 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3747292 is not found' 00:07:52.932 Process with pid 3747292 is not found 00:07:52.932 10:35:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3747300 ]] 00:07:52.932 10:35:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3747300 00:07:52.932 10:35:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3747300 ']' 00:07:52.932 10:35:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3747300 00:07:52.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3747300) - No such process 00:07:52.932 10:35:42 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3747300 is not found' 00:07:52.932 Process with pid 3747300 is not found 00:07:52.932 10:35:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:52.932 00:07:52.932 real 0m14.295s 00:07:52.932 user 0m25.729s 00:07:52.932 sys 0m4.792s 00:07:52.932 10:35:42 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.932 
10:35:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.932 ************************************ 00:07:52.932 END TEST cpu_locks 00:07:52.932 ************************************ 00:07:52.932 00:07:52.932 real 0m38.972s 00:07:52.932 user 1m14.929s 00:07:52.932 sys 0m8.318s 00:07:52.932 10:35:42 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.932 10:35:42 event -- common/autotest_common.sh@10 -- # set +x 00:07:52.932 ************************************ 00:07:52.932 END TEST event 00:07:52.932 ************************************ 00:07:52.932 10:35:42 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:52.932 10:35:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.932 10:35:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.932 10:35:42 -- common/autotest_common.sh@10 -- # set +x 00:07:52.932 ************************************ 00:07:52.932 START TEST thread 00:07:52.932 ************************************ 00:07:52.932 10:35:42 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:52.932 * Looking for test storage... 
00:07:52.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:52.932 10:35:42 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:52.932 10:35:42 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:52.932 10:35:42 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:53.192 10:35:42 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:53.192 10:35:42 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.192 10:35:42 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.192 10:35:42 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.192 10:35:42 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.192 10:35:42 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.192 10:35:42 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.192 10:35:42 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.192 10:35:42 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.192 10:35:42 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.192 10:35:42 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.192 10:35:42 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.192 10:35:42 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:53.192 10:35:42 thread -- scripts/common.sh@345 -- # : 1 00:07:53.192 10:35:42 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.192 10:35:42 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.192 10:35:42 thread -- scripts/common.sh@365 -- # decimal 1 00:07:53.192 10:35:42 thread -- scripts/common.sh@353 -- # local d=1 00:07:53.192 10:35:42 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.192 10:35:42 thread -- scripts/common.sh@355 -- # echo 1 00:07:53.192 10:35:42 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.192 10:35:42 thread -- scripts/common.sh@366 -- # decimal 2 00:07:53.192 10:35:42 thread -- scripts/common.sh@353 -- # local d=2 00:07:53.192 10:35:42 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.192 10:35:42 thread -- scripts/common.sh@355 -- # echo 2 00:07:53.192 10:35:42 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.192 10:35:42 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.192 10:35:42 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.192 10:35:42 thread -- scripts/common.sh@368 -- # return 0 00:07:53.192 10:35:42 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.192 10:35:42 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:53.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.192 --rc genhtml_branch_coverage=1 00:07:53.192 --rc genhtml_function_coverage=1 00:07:53.192 --rc genhtml_legend=1 00:07:53.192 --rc geninfo_all_blocks=1 00:07:53.192 --rc geninfo_unexecuted_blocks=1 00:07:53.192 00:07:53.192 ' 00:07:53.192 10:35:42 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:53.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.192 --rc genhtml_branch_coverage=1 00:07:53.192 --rc genhtml_function_coverage=1 00:07:53.192 --rc genhtml_legend=1 00:07:53.192 --rc geninfo_all_blocks=1 00:07:53.192 --rc geninfo_unexecuted_blocks=1 00:07:53.192 00:07:53.192 ' 00:07:53.192 10:35:42 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:53.192 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.192 --rc genhtml_branch_coverage=1 00:07:53.192 --rc genhtml_function_coverage=1 00:07:53.192 --rc genhtml_legend=1 00:07:53.192 --rc geninfo_all_blocks=1 00:07:53.192 --rc geninfo_unexecuted_blocks=1 00:07:53.192 00:07:53.192 ' 00:07:53.192 10:35:42 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:53.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.192 --rc genhtml_branch_coverage=1 00:07:53.192 --rc genhtml_function_coverage=1 00:07:53.192 --rc genhtml_legend=1 00:07:53.192 --rc geninfo_all_blocks=1 00:07:53.192 --rc geninfo_unexecuted_blocks=1 00:07:53.192 00:07:53.192 ' 00:07:53.192 10:35:42 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:53.192 10:35:42 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:53.192 10:35:42 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.192 10:35:42 thread -- common/autotest_common.sh@10 -- # set +x 00:07:53.192 ************************************ 00:07:53.192 START TEST thread_poller_perf 00:07:53.192 ************************************ 00:07:53.192 10:35:42 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:53.192 [2024-11-19 10:35:42.850362] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:07:53.192 [2024-11-19 10:35:42.850418] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747865 ] 00:07:53.192 [2024-11-19 10:35:42.925034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.192 [2024-11-19 10:35:42.964592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.192 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:54.570 [2024-11-19T09:35:44.362Z] ====================================== 00:07:54.570 [2024-11-19T09:35:44.362Z] busy:2105738566 (cyc) 00:07:54.570 [2024-11-19T09:35:44.362Z] total_run_count: 421000 00:07:54.570 [2024-11-19T09:35:44.362Z] tsc_hz: 2100000000 (cyc) 00:07:54.570 [2024-11-19T09:35:44.362Z] ====================================== 00:07:54.570 [2024-11-19T09:35:44.362Z] poller_cost: 5001 (cyc), 2381 (nsec) 00:07:54.570 00:07:54.570 real 0m1.179s 00:07:54.570 user 0m1.100s 00:07:54.570 sys 0m0.075s 00:07:54.570 10:35:44 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.570 10:35:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:54.570 ************************************ 00:07:54.570 END TEST thread_poller_perf 00:07:54.570 ************************************ 00:07:54.570 10:35:44 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:54.570 10:35:44 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:54.570 10:35:44 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.570 10:35:44 thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.570 ************************************ 00:07:54.570 START TEST thread_poller_perf 00:07:54.570 
************************************ 00:07:54.570 10:35:44 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:54.570 [2024-11-19 10:35:44.101835] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:07:54.570 [2024-11-19 10:35:44.101909] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748114 ] 00:07:54.570 [2024-11-19 10:35:44.177475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.570 [2024-11-19 10:35:44.216527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.570 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:55.507 [2024-11-19T09:35:45.299Z] ====================================== 00:07:55.507 [2024-11-19T09:35:45.299Z] busy:2101270672 (cyc) 00:07:55.507 [2024-11-19T09:35:45.299Z] total_run_count: 5554000 00:07:55.507 [2024-11-19T09:35:45.299Z] tsc_hz: 2100000000 (cyc) 00:07:55.507 [2024-11-19T09:35:45.299Z] ====================================== 00:07:55.507 [2024-11-19T09:35:45.299Z] poller_cost: 378 (cyc), 180 (nsec) 00:07:55.507 00:07:55.507 real 0m1.175s 00:07:55.507 user 0m1.093s 00:07:55.507 sys 0m0.077s 00:07:55.507 10:35:45 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.507 10:35:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:55.507 ************************************ 00:07:55.507 END TEST thread_poller_perf 00:07:55.507 ************************************ 00:07:55.507 10:35:45 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:55.508 00:07:55.508 real 0m2.674s 00:07:55.508 user 0m2.353s 00:07:55.508 sys 0m0.334s 00:07:55.508 10:35:45 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.508 10:35:45 thread -- common/autotest_common.sh@10 -- # set +x 00:07:55.508 ************************************ 00:07:55.508 END TEST thread 00:07:55.508 ************************************ 00:07:55.767 10:35:45 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:55.767 10:35:45 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:55.767 10:35:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.767 10:35:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.767 10:35:45 -- common/autotest_common.sh@10 -- # set +x 00:07:55.767 ************************************ 00:07:55.767 START TEST app_cmdline 00:07:55.767 ************************************ 00:07:55.767 10:35:45 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:55.767 * Looking for test storage... 00:07:55.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:55.767 10:35:45 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:55.767 10:35:45 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:55.767 10:35:45 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:55.768 10:35:45 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.768 10:35:45 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:55.768 10:35:45 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.768 10:35:45 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:55.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.768 --rc genhtml_branch_coverage=1 
00:07:55.768 --rc genhtml_function_coverage=1 00:07:55.768 --rc genhtml_legend=1 00:07:55.768 --rc geninfo_all_blocks=1 00:07:55.768 --rc geninfo_unexecuted_blocks=1 00:07:55.768 00:07:55.768 ' 00:07:55.768 10:35:45 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:55.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.768 --rc genhtml_branch_coverage=1 00:07:55.768 --rc genhtml_function_coverage=1 00:07:55.768 --rc genhtml_legend=1 00:07:55.768 --rc geninfo_all_blocks=1 00:07:55.768 --rc geninfo_unexecuted_blocks=1 00:07:55.768 00:07:55.768 ' 00:07:55.768 10:35:45 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:55.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.768 --rc genhtml_branch_coverage=1 00:07:55.768 --rc genhtml_function_coverage=1 00:07:55.768 --rc genhtml_legend=1 00:07:55.768 --rc geninfo_all_blocks=1 00:07:55.768 --rc geninfo_unexecuted_blocks=1 00:07:55.768 00:07:55.768 ' 00:07:55.768 10:35:45 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:55.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.768 --rc genhtml_branch_coverage=1 00:07:55.768 --rc genhtml_function_coverage=1 00:07:55.768 --rc genhtml_legend=1 00:07:55.768 --rc geninfo_all_blocks=1 00:07:55.768 --rc geninfo_unexecuted_blocks=1 00:07:55.768 00:07:55.768 ' 00:07:55.768 10:35:45 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:55.768 10:35:45 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3748407 00:07:55.768 10:35:45 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:55.768 10:35:45 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3748407 00:07:55.768 10:35:45 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3748407 ']' 00:07:55.768 10:35:45 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:55.768 10:35:45 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.768 10:35:45 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.768 10:35:45 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.768 10:35:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:56.028 [2024-11-19 10:35:45.592535] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:07:56.028 [2024-11-19 10:35:45.592577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748407 ] 00:07:56.028 [2024-11-19 10:35:45.664231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.028 [2024-11-19 10:35:45.707358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.287 10:35:45 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.287 10:35:45 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:56.287 10:35:45 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:56.546 { 00:07:56.546 "version": "SPDK v25.01-pre git sha1 a0c128549", 00:07:56.546 "fields": { 00:07:56.546 "major": 25, 00:07:56.546 "minor": 1, 00:07:56.546 "patch": 0, 00:07:56.546 "suffix": "-pre", 00:07:56.546 "commit": "a0c128549" 00:07:56.546 } 00:07:56.546 } 00:07:56.546 10:35:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:56.546 10:35:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:56.546 10:35:46 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:07:56.546 10:35:46 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:56.546 10:35:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:56.546 10:35:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:56.546 10:35:46 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.546 10:35:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:56.546 10:35:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:56.546 10:35:46 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.546 10:35:46 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:56.546 10:35:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:56.546 10:35:46 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.546 10:35:46 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:56.546 10:35:46 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.546 10:35:46 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.546 10:35:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.546 10:35:46 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.546 10:35:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.546 10:35:46 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.546 10:35:46 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:07:56.546 10:35:46 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.546 10:35:46 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:56.546 10:35:46 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.546 request: 00:07:56.546 { 00:07:56.546 "method": "env_dpdk_get_mem_stats", 00:07:56.546 "req_id": 1 00:07:56.546 } 00:07:56.546 Got JSON-RPC error response 00:07:56.546 response: 00:07:56.546 { 00:07:56.546 "code": -32601, 00:07:56.546 "message": "Method not found" 00:07:56.546 } 00:07:56.805 10:35:46 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:56.805 10:35:46 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.805 10:35:46 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:56.805 10:35:46 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:56.806 10:35:46 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3748407 00:07:56.806 10:35:46 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3748407 ']' 00:07:56.806 10:35:46 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3748407 00:07:56.806 10:35:46 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:56.806 10:35:46 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.806 10:35:46 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3748407 00:07:56.806 10:35:46 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.806 10:35:46 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.806 10:35:46 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3748407' 00:07:56.806 killing process with pid 3748407 00:07:56.806 
10:35:46 app_cmdline -- common/autotest_common.sh@973 -- # kill 3748407 00:07:56.806 10:35:46 app_cmdline -- common/autotest_common.sh@978 -- # wait 3748407 00:07:57.065 00:07:57.065 real 0m1.340s 00:07:57.065 user 0m1.562s 00:07:57.065 sys 0m0.456s 00:07:57.065 10:35:46 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.065 10:35:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:57.065 ************************************ 00:07:57.065 END TEST app_cmdline 00:07:57.065 ************************************ 00:07:57.065 10:35:46 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:57.065 10:35:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.065 10:35:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.065 10:35:46 -- common/autotest_common.sh@10 -- # set +x 00:07:57.065 ************************************ 00:07:57.065 START TEST version 00:07:57.065 ************************************ 00:07:57.065 10:35:46 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:57.325 * Looking for test storage... 
00:07:57.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:57.325 10:35:46 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:57.325 10:35:46 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:57.325 10:35:46 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:57.325 10:35:46 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:57.325 10:35:46 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.325 10:35:46 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.325 10:35:46 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.325 10:35:46 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.325 10:35:46 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.325 10:35:46 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.325 10:35:46 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.325 10:35:46 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.325 10:35:46 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.325 10:35:46 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.325 10:35:46 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.325 10:35:46 version -- scripts/common.sh@344 -- # case "$op" in 00:07:57.325 10:35:46 version -- scripts/common.sh@345 -- # : 1 00:07:57.325 10:35:46 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.325 10:35:46 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.325 10:35:46 version -- scripts/common.sh@365 -- # decimal 1 00:07:57.325 10:35:46 version -- scripts/common.sh@353 -- # local d=1 00:07:57.325 10:35:46 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.326 10:35:46 version -- scripts/common.sh@355 -- # echo 1 00:07:57.326 10:35:46 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.326 10:35:46 version -- scripts/common.sh@366 -- # decimal 2 00:07:57.326 10:35:46 version -- scripts/common.sh@353 -- # local d=2 00:07:57.326 10:35:46 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.326 10:35:46 version -- scripts/common.sh@355 -- # echo 2 00:07:57.326 10:35:46 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.326 10:35:46 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.326 10:35:46 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.326 10:35:46 version -- scripts/common.sh@368 -- # return 0 00:07:57.326 10:35:46 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.326 10:35:46 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:57.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.326 --rc genhtml_branch_coverage=1 00:07:57.326 --rc genhtml_function_coverage=1 00:07:57.326 --rc genhtml_legend=1 00:07:57.326 --rc geninfo_all_blocks=1 00:07:57.326 --rc geninfo_unexecuted_blocks=1 00:07:57.326 00:07:57.326 ' 00:07:57.326 10:35:46 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:57.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.326 --rc genhtml_branch_coverage=1 00:07:57.326 --rc genhtml_function_coverage=1 00:07:57.326 --rc genhtml_legend=1 00:07:57.326 --rc geninfo_all_blocks=1 00:07:57.326 --rc geninfo_unexecuted_blocks=1 00:07:57.326 00:07:57.326 ' 00:07:57.326 10:35:46 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:57.326 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.326 --rc genhtml_branch_coverage=1 00:07:57.326 --rc genhtml_function_coverage=1 00:07:57.326 --rc genhtml_legend=1 00:07:57.326 --rc geninfo_all_blocks=1 00:07:57.326 --rc geninfo_unexecuted_blocks=1 00:07:57.326 00:07:57.326 ' 00:07:57.326 10:35:46 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:57.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.326 --rc genhtml_branch_coverage=1 00:07:57.326 --rc genhtml_function_coverage=1 00:07:57.326 --rc genhtml_legend=1 00:07:57.326 --rc geninfo_all_blocks=1 00:07:57.326 --rc geninfo_unexecuted_blocks=1 00:07:57.326 00:07:57.326 ' 00:07:57.326 10:35:46 version -- app/version.sh@17 -- # get_header_version major 00:07:57.326 10:35:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:57.326 10:35:46 version -- app/version.sh@14 -- # cut -f2 00:07:57.326 10:35:46 version -- app/version.sh@14 -- # tr -d '"' 00:07:57.326 10:35:46 version -- app/version.sh@17 -- # major=25 00:07:57.326 10:35:46 version -- app/version.sh@18 -- # get_header_version minor 00:07:57.326 10:35:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:57.326 10:35:46 version -- app/version.sh@14 -- # cut -f2 00:07:57.326 10:35:46 version -- app/version.sh@14 -- # tr -d '"' 00:07:57.326 10:35:46 version -- app/version.sh@18 -- # minor=1 00:07:57.326 10:35:46 version -- app/version.sh@19 -- # get_header_version patch 00:07:57.326 10:35:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:57.326 10:35:46 version -- app/version.sh@14 -- # cut -f2 00:07:57.326 10:35:46 version -- app/version.sh@14 -- # tr -d '"' 00:07:57.326 
10:35:46 version -- app/version.sh@19 -- # patch=0 00:07:57.326 10:35:46 version -- app/version.sh@20 -- # get_header_version suffix 00:07:57.326 10:35:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:57.326 10:35:46 version -- app/version.sh@14 -- # cut -f2 00:07:57.326 10:35:46 version -- app/version.sh@14 -- # tr -d '"' 00:07:57.326 10:35:46 version -- app/version.sh@20 -- # suffix=-pre 00:07:57.326 10:35:46 version -- app/version.sh@22 -- # version=25.1 00:07:57.326 10:35:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:57.326 10:35:46 version -- app/version.sh@28 -- # version=25.1rc0 00:07:57.326 10:35:46 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:57.326 10:35:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:57.326 10:35:47 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:57.326 10:35:47 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:57.326 00:07:57.326 real 0m0.246s 00:07:57.326 user 0m0.157s 00:07:57.326 sys 0m0.131s 00:07:57.326 10:35:47 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.326 10:35:47 version -- common/autotest_common.sh@10 -- # set +x 00:07:57.326 ************************************ 00:07:57.326 END TEST version 00:07:57.326 ************************************ 00:07:57.326 10:35:47 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:57.326 10:35:47 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:57.326 10:35:47 -- spdk/autotest.sh@194 -- # uname -s 00:07:57.326 10:35:47 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:57.326 10:35:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:57.326 10:35:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:57.326 10:35:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:57.326 10:35:47 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:57.326 10:35:47 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:57.326 10:35:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:57.326 10:35:47 -- common/autotest_common.sh@10 -- # set +x 00:07:57.326 10:35:47 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:57.326 10:35:47 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:57.326 10:35:47 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:57.326 10:35:47 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:57.326 10:35:47 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:57.326 10:35:47 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:57.326 10:35:47 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:57.326 10:35:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.326 10:35:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.326 10:35:47 -- common/autotest_common.sh@10 -- # set +x 00:07:57.586 ************************************ 00:07:57.586 START TEST nvmf_tcp 00:07:57.586 ************************************ 00:07:57.586 10:35:47 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:57.586 * Looking for test storage... 
00:07:57.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:57.586 10:35:47 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:57.586 10:35:47 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:57.586 10:35:47 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:57.586 10:35:47 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.586 10:35:47 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:57.586 10:35:47 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.586 10:35:47 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:57.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.586 --rc genhtml_branch_coverage=1 00:07:57.586 --rc genhtml_function_coverage=1 00:07:57.586 --rc genhtml_legend=1 00:07:57.586 --rc geninfo_all_blocks=1 00:07:57.586 --rc geninfo_unexecuted_blocks=1 00:07:57.586 00:07:57.586 ' 00:07:57.586 10:35:47 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:57.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.586 --rc genhtml_branch_coverage=1 00:07:57.586 --rc genhtml_function_coverage=1 00:07:57.586 --rc genhtml_legend=1 00:07:57.586 --rc geninfo_all_blocks=1 00:07:57.586 --rc geninfo_unexecuted_blocks=1 00:07:57.586 00:07:57.586 ' 00:07:57.586 10:35:47 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:57.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.586 --rc genhtml_branch_coverage=1 00:07:57.586 --rc genhtml_function_coverage=1 00:07:57.586 --rc genhtml_legend=1 00:07:57.586 --rc geninfo_all_blocks=1 00:07:57.586 --rc geninfo_unexecuted_blocks=1 00:07:57.586 00:07:57.586 ' 00:07:57.586 10:35:47 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:57.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.586 --rc genhtml_branch_coverage=1 00:07:57.586 --rc genhtml_function_coverage=1 00:07:57.586 --rc genhtml_legend=1 00:07:57.586 --rc geninfo_all_blocks=1 00:07:57.586 --rc geninfo_unexecuted_blocks=1 00:07:57.586 00:07:57.586 ' 00:07:57.586 10:35:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:57.586 10:35:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:57.586 10:35:47 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:57.586 10:35:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.586 10:35:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.586 10:35:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:57.586 ************************************ 00:07:57.586 START TEST nvmf_target_core 00:07:57.586 ************************************ 00:07:57.586 10:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:57.846 * Looking for test storage... 
00:07:57.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:57.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.846 --rc genhtml_branch_coverage=1 00:07:57.846 --rc genhtml_function_coverage=1 00:07:57.846 --rc genhtml_legend=1 00:07:57.846 --rc geninfo_all_blocks=1 00:07:57.846 --rc geninfo_unexecuted_blocks=1 00:07:57.846 00:07:57.846 ' 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:57.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.846 --rc genhtml_branch_coverage=1 
00:07:57.846 --rc genhtml_function_coverage=1 00:07:57.846 --rc genhtml_legend=1 00:07:57.846 --rc geninfo_all_blocks=1 00:07:57.846 --rc geninfo_unexecuted_blocks=1 00:07:57.846 00:07:57.846 ' 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:57.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.846 --rc genhtml_branch_coverage=1 00:07:57.846 --rc genhtml_function_coverage=1 00:07:57.846 --rc genhtml_legend=1 00:07:57.846 --rc geninfo_all_blocks=1 00:07:57.846 --rc geninfo_unexecuted_blocks=1 00:07:57.846 00:07:57.846 ' 00:07:57.846 10:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:57.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.846 --rc genhtml_branch_coverage=1 00:07:57.846 --rc genhtml_function_coverage=1 00:07:57.846 --rc genhtml_legend=1 00:07:57.846 --rc geninfo_all_blocks=1 00:07:57.847 --rc geninfo_unexecuted_blocks=1 00:07:57.847 00:07:57.847 ' 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:57.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.847 ************************************ 00:07:57.847 START TEST nvmf_abort 00:07:57.847 ************************************ 00:07:57.847 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:58.107 * Looking for test storage... 
00:07:58.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.107 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:58.107 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:58.107 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:58.107 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:58.107 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.107 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.107 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.107 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.107 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.107 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.107 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.108 
10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:58.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.108 --rc genhtml_branch_coverage=1 00:07:58.108 --rc genhtml_function_coverage=1 00:07:58.108 --rc genhtml_legend=1 00:07:58.108 --rc geninfo_all_blocks=1 00:07:58.108 --rc 
geninfo_unexecuted_blocks=1 00:07:58.108 00:07:58.108 ' 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:58.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.108 --rc genhtml_branch_coverage=1 00:07:58.108 --rc genhtml_function_coverage=1 00:07:58.108 --rc genhtml_legend=1 00:07:58.108 --rc geninfo_all_blocks=1 00:07:58.108 --rc geninfo_unexecuted_blocks=1 00:07:58.108 00:07:58.108 ' 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:58.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.108 --rc genhtml_branch_coverage=1 00:07:58.108 --rc genhtml_function_coverage=1 00:07:58.108 --rc genhtml_legend=1 00:07:58.108 --rc geninfo_all_blocks=1 00:07:58.108 --rc geninfo_unexecuted_blocks=1 00:07:58.108 00:07:58.108 ' 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:58.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.108 --rc genhtml_branch_coverage=1 00:07:58.108 --rc genhtml_function_coverage=1 00:07:58.108 --rc genhtml_legend=1 00:07:58.108 --rc geninfo_all_blocks=1 00:07:58.108 --rc geninfo_unexecuted_blocks=1 00:07:58.108 00:07:58.108 ' 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.108 10:35:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:58.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:58.108 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:58.109 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.681 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:04.682 10:35:53 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:04.682 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:04.682 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:04.682 10:35:53 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:04.682 Found net devices under 0000:86:00.0: cvl_0_0 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:08:04.682 Found net devices under 0000:86:00.1: cvl_0_1 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:04.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:04.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:08:04.682 00:08:04.682 --- 10.0.0.2 ping statistics --- 00:08:04.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.682 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:08:04.682 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:04.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:08:04.682 00:08:04.683 --- 10.0.0.1 ping statistics --- 00:08:04.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.683 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3752085 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3752085 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3752085 ']' 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.683 10:35:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:04.683 [2024-11-19 10:35:53.860426] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:08:04.683 [2024-11-19 10:35:53.860473] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.683 [2024-11-19 10:35:53.941869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:04.683 [2024-11-19 10:35:53.986416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.683 [2024-11-19 10:35:53.986447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.683 [2024-11-19 10:35:53.986454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.683 [2024-11-19 10:35:53.986460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.683 [2024-11-19 10:35:53.986465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:04.683 [2024-11-19 10:35:53.987681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.683 [2024-11-19 10:35:53.987762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.683 [2024-11-19 10:35:53.987764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.942 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.942 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:08:04.942 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:04.942 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.942 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.201 [2024-11-19 10:35:54.745498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.201 Malloc0 00:08:05.201 10:35:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.201 Delay0 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.201 [2024-11-19 10:35:54.824432] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.201 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:05.201 [2024-11-19 10:35:54.961832] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:07.816 Initializing NVMe Controllers 00:08:07.816 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:07.816 controller IO queue size 128 less than required 00:08:07.816 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:07.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:07.816 Initialization complete. Launching workers. 
00:08:07.816 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37832 00:08:07.816 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37897, failed to submit 62 00:08:07.816 success 37836, unsuccessful 61, failed 0 00:08:07.816 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:07.816 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.816 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:07.816 rmmod nvme_tcp 00:08:07.816 rmmod nvme_fabrics 00:08:07.816 rmmod nvme_keyring 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:08:07.816 10:35:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3752085 ']' 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3752085 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3752085 ']' 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3752085 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3752085 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3752085' 00:08:07.816 killing process with pid 3752085 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3752085 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3752085 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.816 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.722 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:09.722 00:08:09.722 real 0m11.796s 00:08:09.722 user 0m13.508s 00:08:09.722 sys 0m5.430s 00:08:09.722 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.722 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:09.722 ************************************ 00:08:09.722 END TEST nvmf_abort 00:08:09.722 ************************************ 00:08:09.722 10:35:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:09.722 10:35:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:09.722 10:35:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.722 10:35:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:09.722 ************************************ 00:08:09.722 START TEST nvmf_ns_hotplug_stress 00:08:09.722 ************************************ 00:08:09.722 10:35:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:09.983 * Looking for test storage... 00:08:09.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.983 
10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.983 10:35:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:09.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.983 --rc genhtml_branch_coverage=1 00:08:09.983 --rc genhtml_function_coverage=1 00:08:09.983 --rc genhtml_legend=1 00:08:09.983 --rc geninfo_all_blocks=1 00:08:09.983 --rc geninfo_unexecuted_blocks=1 00:08:09.983 00:08:09.983 ' 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:09.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.983 --rc genhtml_branch_coverage=1 00:08:09.983 --rc genhtml_function_coverage=1 00:08:09.983 --rc genhtml_legend=1 00:08:09.983 --rc geninfo_all_blocks=1 00:08:09.983 --rc geninfo_unexecuted_blocks=1 00:08:09.983 00:08:09.983 ' 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:09.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.983 --rc genhtml_branch_coverage=1 00:08:09.983 --rc genhtml_function_coverage=1 00:08:09.983 --rc genhtml_legend=1 00:08:09.983 --rc geninfo_all_blocks=1 00:08:09.983 --rc geninfo_unexecuted_blocks=1 00:08:09.983 00:08:09.983 ' 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:09.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.983 --rc genhtml_branch_coverage=1 00:08:09.983 --rc genhtml_function_coverage=1 00:08:09.983 --rc genhtml_legend=1 00:08:09.983 --rc geninfo_all_blocks=1 00:08:09.983 --rc geninfo_unexecuted_blocks=1 00:08:09.983 
00:08:09.983 ' 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:09.983 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:09.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:08:09.984 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:08:16.559 10:36:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:16.559 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:16.559 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:16.559 10:36:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:16.559 Found net devices under 0000:86:00.0: cvl_0_0 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.559 10:36:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:16.559 Found net devices under 0000:86:00.1: cvl_0_1 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:16.559 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.560 10:36:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:16.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:08:16.560 00:08:16.560 --- 10.0.0.2 ping statistics --- 00:08:16.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.560 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:16.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:08:16.560 00:08:16.560 --- 10.0.0.1 ping statistics --- 00:08:16.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.560 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3756254 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3756254 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3756254 ']' 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:16.560 [2024-11-19 10:36:05.742146] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:08:16.560 [2024-11-19 10:36:05.742187] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.560 [2024-11-19 10:36:05.804623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:16.560 [2024-11-19 10:36:05.845749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.560 [2024-11-19 10:36:05.845783] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.560 [2024-11-19 10:36:05.845791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.560 [2024-11-19 10:36:05.845798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.560 [2024-11-19 10:36:05.845804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:16.560 [2024-11-19 10:36:05.847263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.560 [2024-11-19 10:36:05.847290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.560 [2024-11-19 10:36:05.847291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:16.560 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:16.560 [2024-11-19 10:36:06.170975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.560 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:16.819 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.819 [2024-11-19 10:36:06.572423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.819 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:17.078 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:17.337 Malloc0 00:08:17.337 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:17.596 Delay0 00:08:17.596 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.854 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:17.854 NULL1 00:08:17.854 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:18.113 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3756739 00:08:18.113 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:18.113 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:18.113 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.372 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.631 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:18.631 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:18.631 true 00:08:18.889 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:18.889 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.889 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.148 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:19.148 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:19.407 true 00:08:19.407 10:36:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:19.407 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.344 Read completed with error (sct=0, sc=11) 00:08:20.344 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.603 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:20.603 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:20.861 true 00:08:20.861 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:20.861 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.120 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.379 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:21.379 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:21.379 true 00:08:21.379 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:21.379 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.757 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.757 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:22.757 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:23.016 true 00:08:23.016 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:23.016 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.954 10:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.954 10:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:23.954 10:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:24.213 true 00:08:24.213 10:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:24.213 10:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.472 10:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.472 10:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:24.731 10:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:24.731 true 00:08:24.731 10:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:24.731 10:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.110 10:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.110 10:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:26.110 10:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:26.372 true 00:08:26.372 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:26.372 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.309 10:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.568 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:27.568 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:27.568 true 00:08:27.568 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:27.568 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.827 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.086 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:28.086 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:28.086 true 00:08:28.345 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:28.346 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.282 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.541 10:36:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:29.541 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:29.800 true 00:08:29.800 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:29.800 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.736 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.736 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:30.736 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:30.995 true 00:08:30.995 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:30.995 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.995 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.254 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:31.254 10:36:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:31.513 true 00:08:31.513 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:31.513 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.891 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.891 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:32.891 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:33.150 true 00:08:33.150 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:33.150 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:08:34.086 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.086 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:34.086 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:34.343 true 00:08:34.343 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:34.343 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.343 10:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.600 10:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:34.600 10:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:34.858 true 00:08:34.858 10:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:34.858 10:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.233 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:08:36.233 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.233 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:36.233 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:36.492 true 00:08:36.492 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:36.492 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.428 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.428 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:37.428 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:37.687 true 00:08:37.687 10:36:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:37.687 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.687 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.945 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:37.945 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:38.204 true 00:08:38.204 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:38.204 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.141 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:08:39.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.400 10:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:39.400 10:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:39.658 true 00:08:39.658 10:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:39.659 10:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.595 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.595 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:40.595 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:40.854 true 00:08:40.854 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:40.854 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.113 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.371 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:41.371 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:41.371 true 00:08:41.371 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:41.371 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.749 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.749 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:42.749 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:43.007 true 00:08:43.007 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:43.007 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.943 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.943 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:43.943 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:44.202 true 00:08:44.202 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:44.202 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.461 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.719 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:44.719 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:44.719 true 00:08:44.719 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 
00:08:44.719 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.096 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.096 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:46.096 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:46.354 true 00:08:46.354 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:46.354 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.289 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.289 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:47.289 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:47.548 true 00:08:47.548 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:47.548 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.807 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.065 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:48.065 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:48.065 true 00:08:48.065 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:48.066 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.448 Initializing NVMe Controllers 00:08:49.448 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:49.448 Controller IO queue size 128, less than required. 00:08:49.448 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:49.448 Controller IO queue size 128, less than required. 
00:08:49.448 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:49.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:49.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:49.448 Initialization complete. Launching workers. 00:08:49.449 ======================================================== 00:08:49.449 Latency(us) 00:08:49.449 Device Information : IOPS MiB/s Average min max 00:08:49.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1944.16 0.95 45241.93 1818.94 1011931.42 00:08:49.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17202.13 8.40 7422.62 1564.32 443450.49 00:08:49.449 ======================================================== 00:08:49.449 Total : 19146.30 9.35 11262.89 1564.32 1011931.42 00:08:49.449 00:08:49.449 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.449 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:49.449 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:49.707 true 00:08:49.707 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3756739 00:08:49.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3756739) - No such process 00:08:49.707 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3756739 00:08:49.707 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.965 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:49.965 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:49.965 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:49.965 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:49.965 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:49.965 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:50.224 null0 00:08:50.224 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:50.224 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:50.224 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:50.482 null1 00:08:50.482 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:50.482 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:50.482 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create null2 100 4096 00:08:50.740 null2 00:08:50.740 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:50.740 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:50.740 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:50.740 null3 00:08:50.740 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:50.740 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:50.740 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:50.999 null4 00:08:50.999 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:50.999 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:50.999 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:51.258 null5 00:08:51.258 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:51.258 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:51.258 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:51.516 null6 00:08:51.516 10:36:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:51.516 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:51.516 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:51.516 null7 00:08:51.516 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:51.516 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:51.516 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:51.517 
10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3762744 3762745 3762746 3762747 3762749 3762751 3762754 3762757 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.517 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:51.776 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:51.776 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:51.776 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:51.776 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:51.776 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.776 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:51.776 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:51.776 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.034 10:36:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.034 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:08:52.035 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:52.319 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:52.319 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:52.319 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:52.319 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:52.319 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:52.319 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:52.319 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:52.319 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.601 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.602 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:52.602 10:36:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:52.602 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:52.602 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:52.602 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:52.602 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:52.602 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:52.602 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.602 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.895 10:36:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.895 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:53.155 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:53.155 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:53.155 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:53.155 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.155 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:53.155 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:53.155 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:53.155 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:53.155 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.155 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.156 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:08:53.156 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.156 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.156 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:53.156 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.156 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.156 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.156 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.156 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:53.156 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:53.416 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.416 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.416 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:53.416 10:36:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.416 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.416 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:53.416 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.416 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.416 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:53.416 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.416 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.416 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:53.416 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:53.416 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:53.416 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:53.416 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:53.416 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.416 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:53.416 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:53.416 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.675 
10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.675 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:53.935 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:53.935 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:53.935 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:53.935 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.935 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:53.935 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:53.935 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:53.935 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:53.935 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.935 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.935 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.194 10:36:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:54.194 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.195 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.195 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:54.195 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:54.195 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:54.195 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:54.195 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.453 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:54.453 10:36:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:54.454 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:54.454 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.454 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:54.713 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:54.713 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:54.713 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.713 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:54.713 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:54.713 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:54.713 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:54.713 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.973 10:36:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:54.973 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:55.232 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:55.232 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:55.232 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:55.232 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.232 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:55.232 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:55.232 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:55.232 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.232 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.232 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:55.232 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.232 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.232 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:55.232 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.232 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.232 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.232 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:55.232 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.232 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:55.232 10:36:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.232 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.232 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:55.232 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.232 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.232 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:55.232 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.232 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.232 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:55.492 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.492 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.492 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:55.492 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:55.492 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:55.492 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:55.492 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:55.492 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:55.492 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:55.492 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.492 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.752 10:36:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:55.752 10:36:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:55.752 rmmod nvme_tcp 00:08:55.752 rmmod nvme_fabrics 00:08:55.752 rmmod nvme_keyring 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:55.752 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:55.753 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:55.753 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3756254 ']' 00:08:55.753 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3756254 00:08:55.753 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3756254 ']' 00:08:55.753 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3756254 00:08:55.753 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:55.753 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.753 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3756254 00:08:56.013 10:36:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:56.013 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:56.013 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3756254' 00:08:56.013 killing process with pid 3756254 00:08:56.013 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3756254 00:08:56.013 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3756254 00:08:56.013 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:56.013 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:56.013 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:56.013 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:56.013 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:56.013 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:56.013 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:08:56.013 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:56.013 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:56.013 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.013 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:08:56.013 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.550 10:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:58.550 00:08:58.550 real 0m48.337s 00:08:58.550 user 3m16.663s 00:08:58.550 sys 0m15.920s 00:08:58.550 10:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.550 10:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:58.550 ************************************ 00:08:58.550 END TEST nvmf_ns_hotplug_stress 00:08:58.550 ************************************ 00:08:58.550 10:36:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:58.550 10:36:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:58.550 10:36:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.550 10:36:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.550 ************************************ 00:08:58.550 START TEST nvmf_delete_subsystem 00:08:58.550 ************************************ 00:08:58.550 10:36:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:58.550 * Looking for test storage... 
00:08:58.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.550 10:36:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:58.550 10:36:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:58.550 10:36:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:58.550 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:58.550 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.550 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.550 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.550 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:58.551 10:36:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:58.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.551 --rc genhtml_branch_coverage=1 00:08:58.551 --rc genhtml_function_coverage=1 00:08:58.551 --rc genhtml_legend=1 00:08:58.551 --rc geninfo_all_blocks=1 00:08:58.551 --rc geninfo_unexecuted_blocks=1 00:08:58.551 00:08:58.551 ' 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:58.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.551 --rc genhtml_branch_coverage=1 00:08:58.551 --rc genhtml_function_coverage=1 00:08:58.551 --rc genhtml_legend=1 00:08:58.551 --rc geninfo_all_blocks=1 00:08:58.551 --rc geninfo_unexecuted_blocks=1 00:08:58.551 00:08:58.551 ' 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:58.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.551 --rc genhtml_branch_coverage=1 00:08:58.551 --rc genhtml_function_coverage=1 00:08:58.551 --rc genhtml_legend=1 00:08:58.551 --rc geninfo_all_blocks=1 00:08:58.551 --rc geninfo_unexecuted_blocks=1 00:08:58.551 00:08:58.551 ' 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:58.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.551 --rc genhtml_branch_coverage=1 00:08:58.551 --rc genhtml_function_coverage=1 00:08:58.551 --rc genhtml_legend=1 00:08:58.551 --rc geninfo_all_blocks=1 00:08:58.551 --rc geninfo_unexecuted_blocks=1 00:08:58.551 00:08:58.551 ' 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.551 10:36:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.551 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:58.552 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:05.122 10:36:53 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:05.122 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:05.122 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:05.123 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:05.123 Found net devices under 0000:86:00.0: cvl_0_0 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:86:00.1: cvl_0_1' 00:09:05.123 Found net devices under 0000:86:00.1: cvl_0_1 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:05.123 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:05.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:05.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:09:05.123 00:09:05.123 --- 10.0.0.2 ping statistics --- 00:09:05.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.123 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:09:05.123 00:09:05.123 --- 10.0.0.1 ping statistics --- 00:09:05.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.123 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:05.123 10:36:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3767257 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3767257 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3767257 ']' 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.123 [2024-11-19 10:36:54.139667] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
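The `gather_supported_nvmf_pci_devs` section earlier in this trace builds per-family arrays (`e810`, `x722`, `mlx`) keyed by PCI vendor:device IDs before picking NICs for the TCP test. A minimal sketch of that classification, using only the IDs visible in the trace (the helper name and table layout here are illustrative, not SPDK's own):

```python
# Vendor/device IDs copied from the nvmf/common.sh trace above.
INTEL, MELLANOX = "0x8086", "0x15b3"

NIC_FAMILIES = {
    "e810": {(INTEL, "0x1592"), (INTEL, "0x159b")},
    "x722": {(INTEL, "0x37d2")},
    "mlx": {(MELLANOX, d) for d in (
        "0xa2dc", "0x1021", "0xa2d6", "0x101d",
        "0x101b", "0x1017", "0x1019", "0x1015", "0x1013",
    )},
}

def classify_nic(vendor: str, device: str):
    """Return the NIC family for a PCI vendor:device pair, or None."""
    for family, ids in NIC_FAMILIES.items():
        if (vendor, device) in ids:
            return family
    return None

# The two ports found in the log, 0000:86:00.0/1, report (0x8086 - 0x159b):
print(classify_nic("0x8086", "0x159b"))  # → e810
```

This matches the log's outcome: both discovered ports classify as e810 (`ice` driver), so `pci_devs` is set to the e810 list before the net-device scan.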
00:09:05.123 [2024-11-19 10:36:54.139711] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.123 [2024-11-19 10:36:54.218220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:05.123 [2024-11-19 10:36:54.257579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.123 [2024-11-19 10:36:54.257616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.123 [2024-11-19 10:36:54.257622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.123 [2024-11-19 10:36:54.257628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.123 [2024-11-19 10:36:54.257633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
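The `nvmf_tcp_init` steps earlier in the trace isolate the target-side port in its own network namespace so initiator and target can exchange real TCP traffic on a single host. A rough reconstruction of that command sequence as plain data (the real script runs these as root; interface and namespace names are taken from the log, the function itself is illustrative):

```python
def netns_setup_cmds(target_if="cvl_0_0", init_if="cvl_0_1",
                     ns="cvl_0_0_ns_spdk",
                     target_ip="10.0.0.2", init_ip="10.0.0.1"):
    """Command sequence mirroring nvmf_tcp_init in the trace above."""
    return [
        f"ip -4 addr flush {target_if}",
        f"ip -4 addr flush {init_if}",
        f"ip netns add {ns}",                      # namespace for the target side
        f"ip link set {target_if} netns {ns}",     # move the target port into it
        f"ip addr add {init_ip}/24 dev {init_if}", # initiator keeps the host netns
        f"ip netns exec {ns} ip addr add {target_ip}/24 dev {target_if}",
        f"ip link set {init_if} up",
        f"ip netns exec {ns} ip link set {target_if} up",
        f"ip netns exec {ns} ip link set lo up",
        f"iptables -I INPUT 1 -i {init_if} -p tcp --dport 4420 -j ACCEPT",
    ]

for cmd in netns_setup_cmds():
    print(cmd)
```

The two `ping -c 1` checks in the log then verify reachability in both directions, and `NVMF_APP` is prefixed with `ip netns exec cvl_0_0_ns_spdk` so `nvmf_tgt` starts inside the namespace.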
00:09:05.123 [2024-11-19 10:36:54.258852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.123 [2024-11-19 10:36:54.258852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.123 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.124 [2024-11-19 10:36:54.406299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.124 [2024-11-19 10:36:54.426527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.124 NULL1 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.124 Delay0 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.124 10:36:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3767383 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:05.124 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:05.124 [2024-11-19 10:36:54.537456] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
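Every failed I/O that follows reports `(sct=0, sc=8)`. In NVMe terms, status code type 0 is Generic Command Status and code 0x08 is "Command Aborted due to SQ Deletion" — plausibly what you would see here, since the Delay0 bdev (created above with ~1-second latencies, the `-r/-t/-w/-n` values being microseconds) still holds queued commands when `nvmf_delete_subsystem` tears the queues down mid-run. A small decoder for just the codes relevant here (the table is abridged, not a full spec mapping):

```python
# Abridged NVMe Generic Command Status codes (sct=0); see the NVMe base spec.
GENERIC_STATUS = {
    0x00: "Successful Completion",
    0x04: "Data Transfer Error",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(sct: int, sc: int) -> str:
    """Decode an (sct, sc) completion status pair from the log."""
    if sct != 0:
        return f"sct={sct:#x} sc={sc:#x} (non-generic status type)"
    return GENERIC_STATUS.get(sc, f"generic status {sc:#x}")

print(decode_status(0, 8))  # status reported by the aborted reads/writes
```

The interleaved `starting I/O failed: -6` lines are the initiator-side errno as new submissions fail once the queue pair enters the error state.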
00:09:07.032 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.032 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.032 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 starting I/O failed: -6 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 starting I/O failed: -6 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 starting I/O failed: -6 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 starting I/O failed: -6 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 starting I/O failed: -6 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 starting I/O failed: -6 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error 
(sct=0, sc=8) 00:09:07.032 starting I/O failed: -6 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 starting I/O failed: -6 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 starting I/O failed: -6 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 starting I/O failed: -6 00:09:07.032 [2024-11-19 10:36:56.575731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e3860 is same with the state(6) to be set 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 
Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.032 [2024-11-19 10:36:56.576097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e32c0 is same with the state(6) to be set 00:09:07.032 Write completed with error (sct=0, sc=8) 00:09:07.032 Read completed with error (sct=0, sc=8) 00:09:07.033 Read completed with error (sct=0, sc=8) 00:09:07.033 Write completed with error (sct=0, sc=8) 00:09:07.033 Write completed with error (sct=0, sc=8) 00:09:07.033 Read completed 
with error (sct=0, sc=8)
00:09:07.033 Read completed with error (sct=0, sc=8)
00:09:07.033 Write completed with error (sct=0, sc=8)
[... further repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:09:07.033 [2024-11-19 10:36:56.576305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e34a0 is same with the state(6) to be set
00:09:07.033 starting I/O failed: -6
[... further repeated error completions interleaved with "starting I/O failed: -6" omitted ...]
00:09:07.033 [2024-11-19 10:36:56.576656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99f400d020 is same with the state(6) to be set
00:09:07.970 [2024-11-19 10:36:57.549693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e49a0 is same with the state(6) to be set
[... further repeated error completions omitted ...]
00:09:07.970 [2024-11-19 10:36:57.579455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99f4000c40 is same with the state(6) to be set
[... further repeated error completions omitted ...]
00:09:07.971 [2024-11-19 10:36:57.579683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99f400d350 is same with the state(6) to be set
[... further repeated error completions omitted ...]
00:09:07.971 [2024-11-19 10:36:57.579817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99f400d7c0 is same with the state(6) to be set
[... further repeated error completions omitted ...]
00:09:07.971 [2024-11-19 10:36:57.580257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e3680 is same with the state(6) to be set
00:09:07.971 Initializing NVMe Controllers
00:09:07.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:07.971 Controller IO queue size 128, less than required.
00:09:07.971 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:07.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:07.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:07.971 Initialization complete. Launching workers.
00:09:07.971 ========================================================
00:09:07.971 Latency(us)
00:09:07.971 Device Information                                                       :   IOPS    MiB/s    Average        min        max
00:09:07.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 154.09     0.08  881862.89     387.51 1009295.25
00:09:07.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.53     0.08 1058180.20     526.97 2001956.40
00:09:07.971 ========================================================
00:09:07.971 Total                                                                    : 319.62     0.16  973174.96     387.51 2001956.40
00:09:07.971
00:09:07.971 [2024-11-19 10:36:57.580901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e49a0 (9): Bad file descriptor
00:09:07.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:09:07.971 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.971 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:09:07.971 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3767383
00:09:07.971 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
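The delete_subsystem.sh trace above polls the perf process with `kill -0 <pid>` plus a 0.5 s sleep until it exits or a retry budget runs out. A minimal standalone sketch of that poll-until-exit pattern; the function and variable names here are illustrative, not the actual SPDK test helpers:

```shell
#!/usr/bin/env bash
# Poll a PID until it exits; give up after max_tries iterations.
# Hypothetical sketch of the kill -0 / sleep loop seen in the trace.
wait_for_exit() {
    local pid=$1 max_tries=${2:-30} delay=${3:-0.5} tries=0
    while kill -0 "$pid" 2>/dev/null; do          # signal 0: existence check only
        (( tries++ > max_tries )) && return 1     # gave up waiting
        sleep "$delay"
    done
    return 0                                      # process is gone
}

sleep 0.2 &                  # short-lived stand-in for the perf process
wait_for_exit "$!" 20 0.1 && echo "perf process exited"
```

`kill -0` sends no signal; it only reports (via its exit status) whether the PID still exists, which is why the trace shows "No such process" once spdk_nvme_perf has finished.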
00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3767383 00:09:08.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3767383) - No such process 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3767383 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3767383 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3767383 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.539 [2024-11-19 10:36:58.109644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3767895 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3767895 00:09:08.539 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:08.539 [2024-11-19 10:36:58.199768] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:09.107 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:09.107 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3767895 00:09:09.107 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:09.366 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:09.366 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3767895 00:09:09.366 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:09.938 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:09.938 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3767895 00:09:09.938 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:10.505 10:37:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 
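The spdk_nvme_perf invocation above requests `-w randrw -M 70`, i.e. a random workload with roughly a 70/30 read/write split. A hypothetical shell illustration of what that mix ratio means; this models the flag's semantics, not spdk_nvme_perf's actual C implementation:

```shell
#!/usr/bin/env bash
# Model of "-w randrw -M 70": each I/O is a read with ~70% probability.
rw_mixread=70
reads=0
writes=0
for _ in $(seq 1 1000); do
    if (( RANDOM % 100 < rw_mixread )); then
        reads=$((reads + 1))      # ~70% of I/Os
    else
        writes=$((writes + 1))    # ~30% of I/Os
    fi
done
echo "reads=$reads writes=$writes total=$((reads + writes))"
```

Over a long run the observed mix converges on the requested ratio, which is consistent with the mixed Read/Write completions in the error dump above.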
00:09:10.505 10:37:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3767895 00:09:10.505 10:37:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:11.079 10:37:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:11.079 10:37:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3767895 00:09:11.079 10:37:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:11.646 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:11.646 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3767895 00:09:11.646 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:11.905 Initializing NVMe Controllers 00:09:11.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:11.905 Controller IO queue size 128, less than required. 00:09:11.905 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:11.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:11.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:11.905 Initialization complete. Launching workers. 
00:09:11.905 ========================================================
00:09:11.905 Latency(us)
00:09:11.905 Device Information                                                       :   IOPS    MiB/s    Average        min        max
00:09:11.905 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00     0.06 1002246.77 1000123.52 1042717.68
00:09:11.905 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00     0.06 1004052.05 1000151.35 1042025.33
00:09:11.905 ========================================================
00:09:11.905 Total                                                                    : 256.00     0.12 1003149.41 1000123.52 1042717.68
00:09:11.905
00:09:11.905 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:11.905 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3767895
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3767895) - No such process
00:09:11.905 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3767895
00:09:11.905 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:11.905 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:09:11.905 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:11.905 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:09:11.905 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:11.905 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:09:11.905 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:11.905 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 00:09:11.905 rmmod nvme_tcp 00:09:11.905 rmmod nvme_fabrics 00:09:12.165 rmmod nvme_keyring 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3767257 ']' 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3767257 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3767257 ']' 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3767257 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3767257 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3767257' 00:09:12.165 killing process with pid 3767257 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3767257 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
3767257 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:09:12.165 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:12.424 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:12.424 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.424 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.424 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.330 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:14.330 00:09:14.330 real 0m16.147s 00:09:14.330 user 0m29.226s 00:09:14.330 sys 0m5.473s 00:09:14.330 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.330 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.330 ************************************ 00:09:14.330 END TEST 
nvmf_delete_subsystem 00:09:14.330 ************************************ 00:09:14.330 10:37:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:14.330 10:37:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:14.330 10:37:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.330 10:37:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.330 ************************************ 00:09:14.330 START TEST nvmf_host_management 00:09:14.330 ************************************ 00:09:14.330 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:14.590 * Looking for test storage... 00:09:14.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.590 10:37:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:14.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.590 --rc genhtml_branch_coverage=1 00:09:14.590 --rc genhtml_function_coverage=1 00:09:14.590 --rc genhtml_legend=1 00:09:14.590 --rc 
geninfo_all_blocks=1 00:09:14.590 --rc geninfo_unexecuted_blocks=1 00:09:14.590 00:09:14.590 ' 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:14.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.590 --rc genhtml_branch_coverage=1 00:09:14.590 --rc genhtml_function_coverage=1 00:09:14.590 --rc genhtml_legend=1 00:09:14.590 --rc geninfo_all_blocks=1 00:09:14.590 --rc geninfo_unexecuted_blocks=1 00:09:14.590 00:09:14.590 ' 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:14.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.590 --rc genhtml_branch_coverage=1 00:09:14.590 --rc genhtml_function_coverage=1 00:09:14.590 --rc genhtml_legend=1 00:09:14.590 --rc geninfo_all_blocks=1 00:09:14.590 --rc geninfo_unexecuted_blocks=1 00:09:14.590 00:09:14.590 ' 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:14.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.590 --rc genhtml_branch_coverage=1 00:09:14.590 --rc genhtml_function_coverage=1 00:09:14.590 --rc genhtml_legend=1 00:09:14.590 --rc geninfo_all_blocks=1 00:09:14.590 --rc geninfo_unexecuted_blocks=1 00:09:14.590 00:09:14.590 ' 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:14.590 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.591 
10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:09:14.591 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:21.164 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:21.164 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.164 10:37:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:21.164 Found net devices under 0000:86:00.0: cvl_0_0 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.164 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:21.165 Found net devices under 0000:86:00.1: cvl_0_1 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:21.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:21.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:09:21.165 00:09:21.165 --- 10.0.0.2 ping statistics --- 00:09:21.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.165 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:21.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:09:21.165 00:09:21.165 --- 10.0.0.1 ping statistics --- 00:09:21.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.165 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.165 10:37:10 
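The trace above shows `nvmf_tcp_init` splitting the two `ice` ports into a target/initiator pair: `cvl_0_0` is moved into a private network namespace as 10.0.0.2 while `cvl_0_1` stays in the root namespace as 10.0.0.1, with an iptables rule opening port 4420 and pings in both directions confirming the link. The same sequence can be sketched as a dry run (commands are echoed, not executed, since they require root and the physical NICs; interface and namespace names are taken from the log):

```shell
#!/bin/sh
# Dry-run sketch of the namespace topology nvmf_tcp_init builds in the
# trace above. run() only records and prints each command, so this is
# safe to execute anywhere; drop run() to apply it for real (as root).
out=""
run() { out="$out+ $*
"; echo "+ $*"; }

NS=cvl_0_0_ns_spdk                       # target-side namespace from the log

run ip -4 addr flush cvl_0_0             # clear any stale addresses
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"      # target NIC into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1  # initiator stays in root netns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# allow NVMe/TCP traffic to the default discovery/IO port
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

After this, the target app is launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the log's `NVMF_TARGET_NS_CMD` prefix appears on the `nvmf_tgt` invocation below.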
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3772085 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3772085 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3772085 ']' 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.165 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.165 [2024-11-19 10:37:10.427774] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:09:21.165 [2024-11-19 10:37:10.427825] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.165 [2024-11-19 10:37:10.509805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.165 [2024-11-19 10:37:10.552476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.165 [2024-11-19 10:37:10.552515] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.165 [2024-11-19 10:37:10.552522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.165 [2024-11-19 10:37:10.552530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.165 [2024-11-19 10:37:10.552535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:21.165 [2024-11-19 10:37:10.554187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.165 [2024-11-19 10:37:10.554295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.165 [2024-11-19 10:37:10.554396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.165 [2024-11-19 10:37:10.554396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.734 [2024-11-19 10:37:11.300863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:21.734 10:37:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.734 Malloc0 00:09:21.734 [2024-11-19 10:37:11.381161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3772356 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3772356 /var/tmp/bdevperf.sock 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3772356 ']' 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:21.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:21.734 { 00:09:21.734 "params": { 00:09:21.734 "name": "Nvme$subsystem", 00:09:21.734 "trtype": "$TEST_TRANSPORT", 00:09:21.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:21.734 "adrfam": "ipv4", 00:09:21.734 "trsvcid": "$NVMF_PORT", 00:09:21.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:21.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:21.734 "hdgst": ${hdgst:-false}, 
00:09:21.734 "ddgst": ${ddgst:-false} 00:09:21.734 }, 00:09:21.734 "method": "bdev_nvme_attach_controller" 00:09:21.734 } 00:09:21.734 EOF 00:09:21.734 )") 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:21.734 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:21.734 "params": { 00:09:21.734 "name": "Nvme0", 00:09:21.734 "trtype": "tcp", 00:09:21.734 "traddr": "10.0.0.2", 00:09:21.734 "adrfam": "ipv4", 00:09:21.734 "trsvcid": "4420", 00:09:21.734 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:21.734 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:21.734 "hdgst": false, 00:09:21.734 "ddgst": false 00:09:21.734 }, 00:09:21.734 "method": "bdev_nvme_attach_controller" 00:09:21.734 }' 00:09:21.734 [2024-11-19 10:37:11.476371] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:09:21.734 [2024-11-19 10:37:11.476415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772356 ] 00:09:21.994 [2024-11-19 10:37:11.551306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.994 [2024-11-19 10:37:11.592807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.253 Running I/O for 10 seconds... 
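The `gen_nvmf_target_json` call above builds the bdevperf `--json` config by expanding a per-subsystem heredoc template and piping it through `jq`; the resulting `bdev_nvme_attach_controller` entry is printed in the trace. A minimal stand-alone sketch of that template-expansion step (variable values are illustrative, matching the ones the log resolved; `jq` is omitted so it runs anywhere):

```shell
#!/bin/sh
# Expand one subsystem's attach-controller config the way the heredoc in
# nvmf/common.sh does: shell variables are substituted into a JSON body.
subsystem=0
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

In the real script each such fragment is appended to a `config` array and merged by `jq` into the final document that bdevperf reads from `/dev/fd/63` via process substitution.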
00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1091 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1091 -ge 100 ']' 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:22.822 [2024-11-19 10:37:12.404484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db5200 is same with the state(6) to be set 00:09:22.822 [2024-11-19 10:37:12.404555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db5200 is same with the state(6) to be set 00:09:22.822 [2024-11-19 10:37:12.404563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db5200 is 
same with the state(6) to be set 00:09:22.822 [2024-11-19 10:37:12.404570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db5200 is same with the state(6) to be set 00:09:22.822 [2024-11-19 10:37:12.404576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db5200 is same with the state(6) to be set 00:09:22.822 [2024-11-19 10:37:12.404582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db5200 is same with the state(6) to be set 00:09:22.822 [2024-11-19 10:37:12.404588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db5200 is same with the state(6) to be set 00:09:22.822 [2024-11-19 10:37:12.404594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db5200 is same with the state(6) to be set 00:09:22.822 [2024-11-19 10:37:12.404600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db5200 is same with the state(6) to be set 00:09:22.822 [2024-11-19 10:37:12.404606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db5200 is same with the state(6) to be set 00:09:22.822 [2024-11-19 10:37:12.404611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db5200 is same with the state(6) to be set 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.822 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:22.822 [2024-11-19 10:37:12.412270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.822 [2024-11-19 10:37:12.412302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.822 [2024-11-19 10:37:12.412318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.822 [2024-11-19 10:37:12.412326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.822 [2024-11-19 10:37:12.412335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.822 [2024-11-19 10:37:12.412342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.822 [2024-11-19 10:37:12.412352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.822 [2024-11-19 10:37:12.412359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.822 [2024-11-19 10:37:12.412368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.822 [2024-11-19 10:37:12.412375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.822 [2024-11-19 10:37:12.412383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.822 [2024-11-19 10:37:12.412391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:09:22.822 [2024-11-19 10:37:12.412399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.822 [2024-11-19 10:37:12.412406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.822 [2024-11-19 10:37:12.412414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.822 [2024-11-19 10:37:12.412421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.822 [2024-11-19 10:37:12.412430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.822 [2024-11-19 10:37:12.412436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.822 [2024-11-19 10:37:12.412444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.822 [2024-11-19 10:37:12.412451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.822 [2024-11-19 10:37:12.412459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.822 [2024-11-19 10:37:12.412467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.822 [2024-11-19 10:37:12.412475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.822 [2024-11-19 
10:37:12.412482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.822 [2024-11-19 10:37:12.412490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 
[2024-11-19 10:37:12.412846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.412987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.412995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.413002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.413010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.413016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.413024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.413031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.413038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.413045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.413053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.413059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.413067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.413073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.413087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.823 [2024-11-19 10:37:12.413094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:09:22.823 [2024-11-19 10:37:12.413102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.824 [2024-11-19 10:37:12.413108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.824 [2024-11-19 10:37:12.413117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.824 [2024-11-19 10:37:12.413123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.824 [2024-11-19 10:37:12.413132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.824 [2024-11-19 10:37:12.413138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.824 [2024-11-19 10:37:12.413146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.824 [2024-11-19 10:37:12.413153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.824 [2024-11-19 10:37:12.413162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.824 [2024-11-19 10:37:12.413170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.824 [2024-11-19 10:37:12.413178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.824 [2024-11-19 
10:37:12.413185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.824 [2024-11-19 10:37:12.413193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.824 [2024-11-19 10:37:12.413199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.824 [2024-11-19 10:37:12.413213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.824 [2024-11-19 10:37:12.413219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.824 [2024-11-19 10:37:12.413228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.824 [2024-11-19 10:37:12.413234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.824 [2024-11-19 10:37:12.413242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.824 [2024-11-19 10:37:12.413249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.824 [2024-11-19 10:37:12.413257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.824 [2024-11-19 10:37:12.413263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.824 [2024-11-19 10:37:12.413271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.824 [2024-11-19 10:37:12.413279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.824 [2024-11-19 10:37:12.413287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.824 [2024-11-19 10:37:12.413294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.824 [2024-11-19 10:37:12.413391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:22.824 [2024-11-19 10:37:12.413401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.824 [2024-11-19 10:37:12.413409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:22.824 [2024-11-19 10:37:12.413417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.824 [2024-11-19 10:37:12.413426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:22.824 [2024-11-19 10:37:12.413434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.824 [2024-11-19 10:37:12.413444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:22.824 [2024-11-19 10:37:12.413452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.824 [2024-11-19 10:37:12.413460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023500 is same with the state(6) to be set 00:09:22.824 [2024-11-19 10:37:12.414329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:22.824 task offset: 24576 on job bdev=Nvme0n1 fails 00:09:22.824 00:09:22.824 Latency(us) 00:09:22.824 [2024-11-19T09:37:12.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.824 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:22.824 Job: Nvme0n1 ended in about 0.61 seconds with error 00:09:22.824 Verification LBA range: start 0x0 length 0x400 00:09:22.824 Nvme0n1 : 0.61 1979.33 123.71 104.18 0.00 30101.76 1739.82 27088.21 00:09:22.824 [2024-11-19T09:37:12.616Z] =================================================================================================================== 00:09:22.824 [2024-11-19T09:37:12.616Z] Total : 1979.33 123.71 104.18 0.00 30101.76 1739.82 27088.21 00:09:22.824 [2024-11-19 10:37:12.416659] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:22.824 [2024-11-19 10:37:12.416679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2023500 (9): Bad file descriptor 00:09:22.824 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.824 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:22.824 [2024-11-19 10:37:12.550389] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
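[Editor's note] The `waitforio` loop traced above polls `bdev_get_iostat` over the bdevperf RPC socket and extracts `.bdevs[0].num_read_ops` with jq; here it read 1091 ops, clearing the 100-op threshold (`'[' 1091 -ge 100 ']'`). A minimal sketch of that extraction, using a hypothetical iostat payload shaped like the RPC reply (the payload values are illustrative, only the jq filter comes from the log):

```shell
#!/bin/sh
# Hypothetical bdev_get_iostat reply; the field path matches the filter
# used by host_management.sh ('.bdevs[0].num_read_ops').
iostat='{"bdevs":[{"name":"Nvme0n1","num_read_ops":1091,"num_write_ops":887}]}'

# Same extraction the test performs after
# 'rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1'
read_io_count=$(printf '%s\n' "$iostat" | jq -r '.bdevs[0].num_read_ops')
echo "$read_io_count"

# The loop breaks (ret=0) once the count reaches the 100-op threshold
[ "$read_io_count" -ge 100 ] && echo "threshold reached"
```

This is why the test can declare I/O "flowing" before the host is removed: a single iostat sample above the threshold is sufficient.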
00:09:23.760 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3772356 00:09:23.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3772356) - No such process 00:09:23.760 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:23.760 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:23.761 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:23.761 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:23.761 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:23.761 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:23.761 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:23.761 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:23.761 { 00:09:23.761 "params": { 00:09:23.761 "name": "Nvme$subsystem", 00:09:23.761 "trtype": "$TEST_TRANSPORT", 00:09:23.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:23.761 "adrfam": "ipv4", 00:09:23.761 "trsvcid": "$NVMF_PORT", 00:09:23.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:23.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:23.761 "hdgst": ${hdgst:-false}, 00:09:23.761 "ddgst": ${ddgst:-false} 00:09:23.761 }, 00:09:23.761 "method": "bdev_nvme_attach_controller" 00:09:23.761 } 00:09:23.761 EOF 00:09:23.761 )") 00:09:23.761 
10:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:23.761 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:23.761 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:23.761 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:23.761 "params": { 00:09:23.761 "name": "Nvme0", 00:09:23.761 "trtype": "tcp", 00:09:23.761 "traddr": "10.0.0.2", 00:09:23.761 "adrfam": "ipv4", 00:09:23.761 "trsvcid": "4420", 00:09:23.761 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:23.761 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:23.761 "hdgst": false, 00:09:23.761 "ddgst": false 00:09:23.761 }, 00:09:23.761 "method": "bdev_nvme_attach_controller" 00:09:23.761 }' 00:09:23.761 [2024-11-19 10:37:13.470731] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:09:23.761 [2024-11-19 10:37:13.470778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772612 ] 00:09:23.761 [2024-11-19 10:37:13.547728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.020 [2024-11-19 10:37:13.587504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.020 Running I/O for 1 seconds... 
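[Editor's note] The bdevperf configuration streamed to `--json /dev/fd/62` above is assembled by `gen_nvmf_target_json`: a per-subsystem heredoc is accumulated into `config`, then validated with `jq .` before being handed to bdevperf. A minimal sketch of that pattern, reusing only values visible in this log (the standalone script structure is illustrative, not the actual `nvmf/common.sh` code):

```shell
#!/bin/sh
# Per-subsystem attach-controller stanza, mirroring the template traced
# from nvmf/common.sh; ${hdgst:-false}/${ddgst:-false} default to false
# when unset, exactly as in the expanded output above.
subsystem=0
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)

# 'jq .' both validates the JSON and pretty-prints it; here we just pull
# the RPC method name to confirm the stanza parsed.
printf '%s\n' "$config" | jq -r '.method'
```

Because the heredoc is unquoted, shell parameter expansion fills in the subsystem index and digest defaults before jq ever sees the text, which is how one template serves every `Nvme$subsystem` controller.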
00:09:25.399 2011.00 IOPS, 125.69 MiB/s 00:09:25.399 Latency(us) 00:09:25.399 [2024-11-19T09:37:15.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.400 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:25.400 Verification LBA range: start 0x0 length 0x400 00:09:25.400 Nvme0n1 : 1.01 2056.41 128.53 0.00 0.00 30535.96 1810.04 26838.55 00:09:25.400 [2024-11-19T09:37:15.192Z] =================================================================================================================== 00:09:25.400 [2024-11-19T09:37:15.192Z] Total : 2056.41 128.53 0.00 0.00 30535.96 1810.04 26838.55 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.400 10:37:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:25.400 rmmod nvme_tcp 00:09:25.400 rmmod nvme_fabrics 00:09:25.400 rmmod nvme_keyring 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3772085 ']' 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3772085 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3772085 ']' 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3772085 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.400 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3772085 00:09:25.400 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:25.400 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:25.400 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3772085' 00:09:25.400 killing process with pid 3772085 00:09:25.400 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3772085 00:09:25.400 10:37:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3772085 00:09:25.659 [2024-11-19 10:37:15.194454] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:25.659 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:25.659 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:25.659 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:25.659 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:25.659 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:25.659 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:25.659 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:25.659 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:25.659 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:25.659 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.659 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.659 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.568 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:27.568 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:27.568 00:09:27.568 real 0m13.197s 00:09:27.568 user 0m22.992s 
00:09:27.568 sys 0m5.707s 00:09:27.568 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.568 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:27.568 ************************************ 00:09:27.568 END TEST nvmf_host_management 00:09:27.568 ************************************ 00:09:27.568 10:37:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:27.568 10:37:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:27.568 10:37:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.568 10:37:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.828 ************************************ 00:09:27.828 START TEST nvmf_lvol 00:09:27.829 ************************************ 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:27.829 * Looking for test storage... 
00:09:27.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.829 10:37:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:27.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.829 --rc genhtml_branch_coverage=1 00:09:27.829 --rc genhtml_function_coverage=1 00:09:27.829 --rc genhtml_legend=1 00:09:27.829 --rc geninfo_all_blocks=1 00:09:27.829 --rc geninfo_unexecuted_blocks=1 
00:09:27.829 00:09:27.829 ' 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:27.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.829 --rc genhtml_branch_coverage=1 00:09:27.829 --rc genhtml_function_coverage=1 00:09:27.829 --rc genhtml_legend=1 00:09:27.829 --rc geninfo_all_blocks=1 00:09:27.829 --rc geninfo_unexecuted_blocks=1 00:09:27.829 00:09:27.829 ' 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:27.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.829 --rc genhtml_branch_coverage=1 00:09:27.829 --rc genhtml_function_coverage=1 00:09:27.829 --rc genhtml_legend=1 00:09:27.829 --rc geninfo_all_blocks=1 00:09:27.829 --rc geninfo_unexecuted_blocks=1 00:09:27.829 00:09:27.829 ' 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:27.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.829 --rc genhtml_branch_coverage=1 00:09:27.829 --rc genhtml_function_coverage=1 00:09:27.829 --rc genhtml_legend=1 00:09:27.829 --rc geninfo_all_blocks=1 00:09:27.829 --rc geninfo_unexecuted_blocks=1 00:09:27.829 00:09:27.829 ' 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.829 10:37:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.829 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:27.830 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:34.403 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:34.403 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.403 
10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:34.403 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:34.404 Found net devices under 0000:86:00.0: cvl_0_0 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.404 10:37:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:34.404 Found net devices under 0000:86:00.1: cvl_0_1 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:34.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:34.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:09:34.404 00:09:34.404 --- 10.0.0.2 ping statistics --- 00:09:34.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.404 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:34.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:09:34.404 00:09:34.404 --- 10.0.0.1 ping statistics --- 00:09:34.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.404 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3776575 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3776575 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3776575 ']' 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:34.404 [2024-11-19 10:37:23.660160] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:09:34.404 [2024-11-19 10:37:23.660210] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.404 [2024-11-19 10:37:23.738555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:34.404 [2024-11-19 10:37:23.780115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.404 [2024-11-19 10:37:23.780150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.404 [2024-11-19 10:37:23.780157] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.404 [2024-11-19 10:37:23.780162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.404 [2024-11-19 10:37:23.780167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:34.404 [2024-11-19 10:37:23.781479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.404 [2024-11-19 10:37:23.781586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.404 [2024-11-19 10:37:23.781586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:34.404 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:34.405 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.405 10:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:34.405 [2024-11-19 10:37:24.078512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.405 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.663 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:34.663 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.921 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:34.921 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:35.181 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:35.439 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7bb90f50-3d37-422c-8731-5772c0af7eb3 00:09:35.439 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7bb90f50-3d37-422c-8731-5772c0af7eb3 lvol 20 00:09:35.439 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=cf81cc4d-5ac5-4bdb-ba76-6f82eed84e9f 00:09:35.439 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:35.698 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cf81cc4d-5ac5-4bdb-ba76-6f82eed84e9f 00:09:35.956 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:35.956 [2024-11-19 10:37:25.732767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.214 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:36.214 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3776874 00:09:36.214 10:37:25 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:36.214 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:37.589 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot cf81cc4d-5ac5-4bdb-ba76-6f82eed84e9f MY_SNAPSHOT 00:09:37.589 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c81ac29f-352f-4250-8824-8babdc72f5c0 00:09:37.589 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize cf81cc4d-5ac5-4bdb-ba76-6f82eed84e9f 30 00:09:37.847 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c81ac29f-352f-4250-8824-8babdc72f5c0 MY_CLONE 00:09:38.106 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=65eeacfb-1de8-45e2-811e-e6a800ed6a17 00:09:38.106 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 65eeacfb-1de8-45e2-811e-e6a800ed6a17 00:09:38.675 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3776874 00:09:46.784 Initializing NVMe Controllers 00:09:46.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:46.784 Controller IO queue size 128, less than required. 00:09:46.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:46.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:46.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:46.784 Initialization complete. Launching workers. 00:09:46.784 ======================================================== 00:09:46.784 Latency(us) 00:09:46.784 Device Information : IOPS MiB/s Average min max 00:09:46.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12001.20 46.88 10665.43 1508.81 104064.86 00:09:46.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11905.10 46.50 10754.50 3524.64 44413.11 00:09:46.784 ======================================================== 00:09:46.785 Total : 23906.30 93.38 10709.79 1508.81 104064.86 00:09:46.785 00:09:46.785 10:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:46.785 10:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cf81cc4d-5ac5-4bdb-ba76-6f82eed84e9f 00:09:47.043 10:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7bb90f50-3d37-422c-8731-5772c0af7eb3 00:09:47.302 10:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:47.302 10:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:47.302 10:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:47.302 10:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:47.302 10:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:47.302 10:37:36 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.302 10:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:47.302 10:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.302 10:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.302 rmmod nvme_tcp 00:09:47.302 rmmod nvme_fabrics 00:09:47.302 rmmod nvme_keyring 00:09:47.302 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.302 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:47.302 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:47.302 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3776575 ']' 00:09:47.302 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3776575 00:09:47.302 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3776575 ']' 00:09:47.302 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3776575 00:09:47.302 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:47.302 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.302 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3776575 00:09:47.302 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.302 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.302 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3776575' 00:09:47.302 killing process with pid 3776575 00:09:47.302 
10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3776575 00:09:47.302 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3776575 00:09:47.561 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:47.561 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:47.561 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:47.561 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:47.561 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:47.561 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:47.561 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:47.561 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:47.561 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:47.561 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.561 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.561 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:50.095 00:09:50.095 real 0m21.994s 00:09:50.095 user 1m3.069s 00:09:50.095 sys 0m7.680s 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:50.095 ************************************ 00:09:50.095 
END TEST nvmf_lvol 00:09:50.095 ************************************ 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.095 ************************************ 00:09:50.095 START TEST nvmf_lvs_grow 00:09:50.095 ************************************ 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:50.095 * Looking for test storage... 00:09:50.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.095 10:37:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:50.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.095 --rc genhtml_branch_coverage=1 00:09:50.095 --rc genhtml_function_coverage=1 00:09:50.095 --rc genhtml_legend=1 00:09:50.095 --rc geninfo_all_blocks=1 00:09:50.095 --rc geninfo_unexecuted_blocks=1 00:09:50.095 00:09:50.095 ' 
00:09:50.095 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:50.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.095 --rc genhtml_branch_coverage=1 00:09:50.095 --rc genhtml_function_coverage=1 00:09:50.095 --rc genhtml_legend=1 00:09:50.095 --rc geninfo_all_blocks=1 00:09:50.096 --rc geninfo_unexecuted_blocks=1 00:09:50.096 00:09:50.096 ' 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:50.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.096 --rc genhtml_branch_coverage=1 00:09:50.096 --rc genhtml_function_coverage=1 00:09:50.096 --rc genhtml_legend=1 00:09:50.096 --rc geninfo_all_blocks=1 00:09:50.096 --rc geninfo_unexecuted_blocks=1 00:09:50.096 00:09:50.096 ' 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:50.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.096 --rc genhtml_branch_coverage=1 00:09:50.096 --rc genhtml_function_coverage=1 00:09:50.096 --rc genhtml_legend=1 00:09:50.096 --rc geninfo_all_blocks=1 00:09:50.096 --rc geninfo_unexecuted_blocks=1 00:09:50.096 00:09:50.096 ' 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.096 10:37:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.096 
10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.096 10:37:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.096 
10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:50.096 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:56.821 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.821 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:56.821 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:56.821 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:56.821 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:56.821 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:56.821 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:56.821 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:56.821 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:56.821 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:56.822 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:56.822 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.822 
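The device-discovery trace above walks `gather_supported_nvmf_pci_devs`, which buckets PCI functions into per-family bash arrays keyed by `vendor:device` ID and then flattens them into `pci_devs`. A minimal standalone sketch of that pattern follows; the `pci_bus_cache` contents and BDF addresses are made-up stand-ins rather than values read from `/sys/bus/pci/devices`:

```shell
# Hypothetical cache mapping "vendor:device" -> space-separated BDF list,
# mimicking what nvmf/common.sh builds by scanning sysfs.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:86:00.0 0000:86:00.1"   # E810 (ice driver)
  ["0x15b3:0x1017"]=""                            # ConnectX-5, none present
)
intel=0x8086 mellanox=0x15b3
e810=() mlx=() pci_devs=()
# Unquoted expansion intentionally word-splits the BDF list into elements.
e810+=(${pci_bus_cache["$intel:0x159b"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
pci_devs+=("${e810[@]}")
for pci in "${pci_devs[@]}"; do
  echo "Found $pci (0x8086 - 0x159b)"
done
```

With two E810 functions cached, the loop prints the same two "Found 0000:86:00.x" lines seen in the trace; an empty cache entry contributes nothing, which is why the Mellanox arrays stay empty on this rig.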
10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:56.822 Found net devices under 0000:86:00.0: cvl_0_0 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:56.822 Found net devices under 0000:86:00.1: cvl_0_1 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:56.822 10:37:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:56.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:09:56.822 00:09:56.822 --- 10.0.0.2 ping statistics --- 00:09:56.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.822 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:56.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:09:56.822 00:09:56.822 --- 10.0.0.1 ping statistics --- 00:09:56.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.822 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.822 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:56.823 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3782379 00:09:56.823 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3782379 00:09:56.823 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:56.823 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3782379 ']' 00:09:56.823 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.823 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.823 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.823 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.823 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:56.823 [2024-11-19 10:37:45.768568] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:09:56.823 [2024-11-19 10:37:45.768615] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.823 [2024-11-19 10:37:45.833192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.823 [2024-11-19 10:37:45.873105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.823 [2024-11-19 10:37:45.873140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.823 [2024-11-19 10:37:45.873147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.823 [2024-11-19 10:37:45.873152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.823 [2024-11-19 10:37:45.873157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:56.823 [2024-11-19 10:37:45.873740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.823 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.823 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:56.823 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:56.823 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:56.823 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:56.823 [2024-11-19 10:37:46.180292] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:56.823 ************************************ 00:09:56.823 START TEST lvs_grow_clean 00:09:56.823 ************************************ 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:56.823 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:57.081 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=39f19b47-dbed-41c1-ba34-f3be7442acb1 00:09:57.081 10:37:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39f19b47-dbed-41c1-ba34-f3be7442acb1 00:09:57.081 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:57.081 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:57.081 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:57.081 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 39f19b47-dbed-41c1-ba34-f3be7442acb1 lvol 150 00:09:57.340 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8d37eaf6-b4f5-4a3e-9508-ea0c6acf6ef8 00:09:57.340 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:57.340 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:57.599 [2024-11-19 10:37:47.183993] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:57.599 [2024-11-19 10:37:47.184038] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:57.599 true 00:09:57.599 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39f19b47-dbed-41c1-ba34-f3be7442acb1 00:09:57.599 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:57.857 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:57.857 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:57.857 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8d37eaf6-b4f5-4a3e-9508-ea0c6acf6ef8 00:09:58.116 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:58.374 [2024-11-19 10:37:47.938280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:58.374 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:58.374 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3782768 00:09:58.374 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:58.374 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:58.374 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3782768 /var/tmp/bdevperf.sock 00:09:58.374 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3782768 ']' 00:09:58.374 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:58.374 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.374 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:58.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:58.374 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.374 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:58.633 [2024-11-19 10:37:48.172050] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
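The cluster counts that `lvs_grow_clean` asserts (49 clusters before the grow, 99 after it, as the `total_data_clusters` checks in this trace show) follow directly from the sizes in play: a 200 MiB AIO file carved into 4 MiB clusters, later grown to 400 MiB via `truncate`, `bdev_aio_rescan`, and `bdev_lvol_grow_lvstore`. A small sketch of that arithmetic, assuming roughly one cluster's worth of lvstore metadata overhead (the exact overhead is an assumption here; the test itself only compares totals):

```shell
# Expected total_data_clusters for the 200 MiB initial file and the
# 400 MiB file after the grow: size / cluster_size, minus one cluster
# assumed consumed by lvstore metadata.
cluster_mb=4
for size_mb in 200 400; do
  echo $(( size_mb / cluster_mb - 1 ))
done
```

This yields 49 and then 99, matching the `(( data_clusters == 49 ))` and `(( data_clusters == 99 ))` checks in the trace.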
00:09:58.633 [2024-11-19 10:37:48.172096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3782768 ] 00:09:58.633 [2024-11-19 10:37:48.247185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.633 [2024-11-19 10:37:48.288843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.568 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.568 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:59.568 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:59.826 Nvme0n1 00:09:59.826 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:59.826 [ 00:09:59.826 { 00:09:59.826 "name": "Nvme0n1", 00:09:59.826 "aliases": [ 00:09:59.826 "8d37eaf6-b4f5-4a3e-9508-ea0c6acf6ef8" 00:09:59.826 ], 00:09:59.826 "product_name": "NVMe disk", 00:09:59.826 "block_size": 4096, 00:09:59.826 "num_blocks": 38912, 00:09:59.826 "uuid": "8d37eaf6-b4f5-4a3e-9508-ea0c6acf6ef8", 00:09:59.826 "numa_id": 1, 00:09:59.826 "assigned_rate_limits": { 00:09:59.826 "rw_ios_per_sec": 0, 00:09:59.826 "rw_mbytes_per_sec": 0, 00:09:59.826 "r_mbytes_per_sec": 0, 00:09:59.826 "w_mbytes_per_sec": 0 00:09:59.826 }, 00:09:59.826 "claimed": false, 00:09:59.826 "zoned": false, 00:09:59.826 "supported_io_types": { 00:09:59.826 "read": true, 
00:09:59.826 "write": true, 00:09:59.826 "unmap": true, 00:09:59.826 "flush": true, 00:09:59.826 "reset": true, 00:09:59.826 "nvme_admin": true, 00:09:59.826 "nvme_io": true, 00:09:59.826 "nvme_io_md": false, 00:09:59.826 "write_zeroes": true, 00:09:59.826 "zcopy": false, 00:09:59.826 "get_zone_info": false, 00:09:59.826 "zone_management": false, 00:09:59.826 "zone_append": false, 00:09:59.826 "compare": true, 00:09:59.826 "compare_and_write": true, 00:09:59.826 "abort": true, 00:09:59.826 "seek_hole": false, 00:09:59.826 "seek_data": false, 00:09:59.826 "copy": true, 00:09:59.826 "nvme_iov_md": false 00:09:59.826 }, 00:09:59.826 "memory_domains": [ 00:09:59.826 { 00:09:59.826 "dma_device_id": "system", 00:09:59.826 "dma_device_type": 1 00:09:59.826 } 00:09:59.826 ], 00:09:59.826 "driver_specific": { 00:09:59.826 "nvme": [ 00:09:59.826 { 00:09:59.826 "trid": { 00:09:59.826 "trtype": "TCP", 00:09:59.826 "adrfam": "IPv4", 00:09:59.826 "traddr": "10.0.0.2", 00:09:59.826 "trsvcid": "4420", 00:09:59.826 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:59.826 }, 00:09:59.826 "ctrlr_data": { 00:09:59.826 "cntlid": 1, 00:09:59.826 "vendor_id": "0x8086", 00:09:59.826 "model_number": "SPDK bdev Controller", 00:09:59.826 "serial_number": "SPDK0", 00:09:59.826 "firmware_revision": "25.01", 00:09:59.826 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:59.826 "oacs": { 00:09:59.826 "security": 0, 00:09:59.826 "format": 0, 00:09:59.826 "firmware": 0, 00:09:59.826 "ns_manage": 0 00:09:59.826 }, 00:09:59.826 "multi_ctrlr": true, 00:09:59.826 "ana_reporting": false 00:09:59.826 }, 00:09:59.826 "vs": { 00:09:59.826 "nvme_version": "1.3" 00:09:59.826 }, 00:09:59.826 "ns_data": { 00:09:59.826 "id": 1, 00:09:59.826 "can_share": true 00:09:59.826 } 00:09:59.826 } 00:09:59.826 ], 00:09:59.826 "mp_policy": "active_passive" 00:09:59.826 } 00:09:59.826 } 00:09:59.826 ] 00:10:00.085 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3783003 00:10:00.085 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:00.085 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:00.085 Running I/O for 10 seconds... 00:10:01.021 Latency(us) 00:10:01.021 [2024-11-19T09:37:50.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:01.021 Nvme0n1 : 1.00 22759.00 88.90 0.00 0.00 0.00 0.00 0.00 00:10:01.021 [2024-11-19T09:37:50.813Z] =================================================================================================================== 00:10:01.021 [2024-11-19T09:37:50.813Z] Total : 22759.00 88.90 0.00 0.00 0.00 0.00 0.00 00:10:01.021 00:10:01.957 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 39f19b47-dbed-41c1-ba34-f3be7442acb1 00:10:01.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:01.957 Nvme0n1 : 2.00 23032.00 89.97 0.00 0.00 0.00 0.00 0.00 00:10:01.957 [2024-11-19T09:37:51.749Z] =================================================================================================================== 00:10:01.957 [2024-11-19T09:37:51.749Z] Total : 23032.00 89.97 0.00 0.00 0.00 0.00 0.00 00:10:01.957 00:10:02.215 true 00:10:02.215 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39f19b47-dbed-41c1-ba34-f3be7442acb1 00:10:02.215 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:10:02.474 10:37:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:02.474 10:37:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:02.474 10:37:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3783003 00:10:03.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:03.040 Nvme0n1 : 3.00 23188.33 90.58 0.00 0.00 0.00 0.00 0.00 00:10:03.040 [2024-11-19T09:37:52.832Z] =================================================================================================================== 00:10:03.040 [2024-11-19T09:37:52.832Z] Total : 23188.33 90.58 0.00 0.00 0.00 0.00 0.00 00:10:03.040 00:10:03.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:03.975 Nvme0n1 : 4.00 23317.25 91.08 0.00 0.00 0.00 0.00 0.00 00:10:03.975 [2024-11-19T09:37:53.767Z] =================================================================================================================== 00:10:03.975 [2024-11-19T09:37:53.767Z] Total : 23317.25 91.08 0.00 0.00 0.00 0.00 0.00 00:10:03.975 00:10:05.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:05.350 Nvme0n1 : 5.00 23392.00 91.38 0.00 0.00 0.00 0.00 0.00 00:10:05.350 [2024-11-19T09:37:55.142Z] =================================================================================================================== 00:10:05.350 [2024-11-19T09:37:55.142Z] Total : 23392.00 91.38 0.00 0.00 0.00 0.00 0.00 00:10:05.350 00:10:06.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:06.286 Nvme0n1 : 6.00 23458.50 91.63 0.00 0.00 0.00 0.00 0.00 00:10:06.286 [2024-11-19T09:37:56.078Z] =================================================================================================================== 00:10:06.286 
[2024-11-19T09:37:56.078Z] Total : 23458.50 91.63 0.00 0.00 0.00 0.00 0.00 00:10:06.286 00:10:07.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:07.222 Nvme0n1 : 7.00 23496.57 91.78 0.00 0.00 0.00 0.00 0.00 00:10:07.222 [2024-11-19T09:37:57.014Z] =================================================================================================================== 00:10:07.222 [2024-11-19T09:37:57.014Z] Total : 23496.57 91.78 0.00 0.00 0.00 0.00 0.00 00:10:07.222 00:10:08.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:08.158 Nvme0n1 : 8.00 23533.62 91.93 0.00 0.00 0.00 0.00 0.00 00:10:08.158 [2024-11-19T09:37:57.950Z] =================================================================================================================== 00:10:08.158 [2024-11-19T09:37:57.950Z] Total : 23533.62 91.93 0.00 0.00 0.00 0.00 0.00 00:10:08.158 00:10:09.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.093 Nvme0n1 : 9.00 23556.33 92.02 0.00 0.00 0.00 0.00 0.00 00:10:09.093 [2024-11-19T09:37:58.885Z] =================================================================================================================== 00:10:09.093 [2024-11-19T09:37:58.885Z] Total : 23556.33 92.02 0.00 0.00 0.00 0.00 0.00 00:10:09.093 00:10:10.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.036 Nvme0n1 : 10.00 23573.80 92.09 0.00 0.00 0.00 0.00 0.00 00:10:10.036 [2024-11-19T09:37:59.828Z] =================================================================================================================== 00:10:10.036 [2024-11-19T09:37:59.828Z] Total : 23573.80 92.09 0.00 0.00 0.00 0.00 0.00 00:10:10.036 00:10:10.036 00:10:10.036 Latency(us) 00:10:10.036 [2024-11-19T09:37:59.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:10.037 Nvme0n1 : 10.00 23578.56 92.10 0.00 0.00 5425.95 3105.16 10423.34 00:10:10.037 [2024-11-19T09:37:59.829Z] =================================================================================================================== 00:10:10.037 [2024-11-19T09:37:59.829Z] Total : 23578.56 92.10 0.00 0.00 5425.95 3105.16 10423.34 00:10:10.037 { 00:10:10.037 "results": [ 00:10:10.037 { 00:10:10.037 "job": "Nvme0n1", 00:10:10.037 "core_mask": "0x2", 00:10:10.037 "workload": "randwrite", 00:10:10.037 "status": "finished", 00:10:10.037 "queue_depth": 128, 00:10:10.037 "io_size": 4096, 00:10:10.037 "runtime": 10.003409, 00:10:10.037 "iops": 23578.562068190953, 00:10:10.037 "mibps": 92.10375807887091, 00:10:10.037 "io_failed": 0, 00:10:10.037 "io_timeout": 0, 00:10:10.037 "avg_latency_us": 5425.950216995688, 00:10:10.037 "min_latency_us": 3105.158095238095, 00:10:10.037 "max_latency_us": 10423.344761904762 00:10:10.037 } 00:10:10.037 ], 00:10:10.037 "core_count": 1 00:10:10.037 } 00:10:10.037 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3782768 00:10:10.037 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3782768 ']' 00:10:10.037 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3782768 00:10:10.037 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:10:10.037 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.037 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3782768 00:10:10.037 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:10.037 10:37:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:10.037 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3782768' 00:10:10.037 killing process with pid 3782768 00:10:10.037 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3782768 00:10:10.037 Received shutdown signal, test time was about 10.000000 seconds 00:10:10.037 00:10:10.037 Latency(us) 00:10:10.037 [2024-11-19T09:37:59.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.037 [2024-11-19T09:37:59.829Z] =================================================================================================================== 00:10:10.037 [2024-11-19T09:37:59.829Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:10.037 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3782768 00:10:10.295 10:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:10.554 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:10.812 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:10.812 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39f19b47-dbed-41c1-ba34-f3be7442acb1 00:10:10.812 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:10:10.812 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:10.812 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:11.071 [2024-11-19 10:38:00.765894] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:11.071 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39f19b47-dbed-41c1-ba34-f3be7442acb1 00:10:11.071 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:10:11.071 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39f19b47-dbed-41c1-ba34-f3be7442acb1 00:10:11.071 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.071 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.071 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.071 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.071 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.071 
10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.071 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.071 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:11.071 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39f19b47-dbed-41c1-ba34-f3be7442acb1 00:10:11.329 request: 00:10:11.329 { 00:10:11.329 "uuid": "39f19b47-dbed-41c1-ba34-f3be7442acb1", 00:10:11.329 "method": "bdev_lvol_get_lvstores", 00:10:11.329 "req_id": 1 00:10:11.329 } 00:10:11.329 Got JSON-RPC error response 00:10:11.329 response: 00:10:11.329 { 00:10:11.329 "code": -19, 00:10:11.329 "message": "No such device" 00:10:11.329 } 00:10:11.329 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:11.329 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:11.329 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:11.329 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:11.329 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:11.588 aio_bdev 00:10:11.588 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8d37eaf6-b4f5-4a3e-9508-ea0c6acf6ef8 00:10:11.588 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=8d37eaf6-b4f5-4a3e-9508-ea0c6acf6ef8 00:10:11.588 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.588 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:11.588 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.588 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.588 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:11.588 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8d37eaf6-b4f5-4a3e-9508-ea0c6acf6ef8 -t 2000 00:10:11.847 [ 00:10:11.847 { 00:10:11.847 "name": "8d37eaf6-b4f5-4a3e-9508-ea0c6acf6ef8", 00:10:11.847 "aliases": [ 00:10:11.847 "lvs/lvol" 00:10:11.847 ], 00:10:11.847 "product_name": "Logical Volume", 00:10:11.847 "block_size": 4096, 00:10:11.847 "num_blocks": 38912, 00:10:11.847 "uuid": "8d37eaf6-b4f5-4a3e-9508-ea0c6acf6ef8", 00:10:11.847 "assigned_rate_limits": { 00:10:11.847 "rw_ios_per_sec": 0, 00:10:11.847 "rw_mbytes_per_sec": 0, 00:10:11.847 "r_mbytes_per_sec": 0, 00:10:11.847 "w_mbytes_per_sec": 0 00:10:11.847 }, 00:10:11.847 "claimed": false, 00:10:11.847 "zoned": false, 00:10:11.847 "supported_io_types": { 00:10:11.847 "read": true, 00:10:11.847 "write": true, 00:10:11.847 "unmap": true, 00:10:11.847 "flush": false, 00:10:11.847 "reset": true, 00:10:11.847 
"nvme_admin": false, 00:10:11.847 "nvme_io": false, 00:10:11.847 "nvme_io_md": false, 00:10:11.847 "write_zeroes": true, 00:10:11.847 "zcopy": false, 00:10:11.847 "get_zone_info": false, 00:10:11.847 "zone_management": false, 00:10:11.847 "zone_append": false, 00:10:11.847 "compare": false, 00:10:11.847 "compare_and_write": false, 00:10:11.847 "abort": false, 00:10:11.847 "seek_hole": true, 00:10:11.847 "seek_data": true, 00:10:11.847 "copy": false, 00:10:11.847 "nvme_iov_md": false 00:10:11.847 }, 00:10:11.847 "driver_specific": { 00:10:11.847 "lvol": { 00:10:11.847 "lvol_store_uuid": "39f19b47-dbed-41c1-ba34-f3be7442acb1", 00:10:11.847 "base_bdev": "aio_bdev", 00:10:11.847 "thin_provision": false, 00:10:11.847 "num_allocated_clusters": 38, 00:10:11.847 "snapshot": false, 00:10:11.847 "clone": false, 00:10:11.847 "esnap_clone": false 00:10:11.847 } 00:10:11.847 } 00:10:11.847 } 00:10:11.847 ] 00:10:11.847 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:11.847 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39f19b47-dbed-41c1-ba34-f3be7442acb1 00:10:11.847 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:12.106 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:12.106 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39f19b47-dbed-41c1-ba34-f3be7442acb1 00:10:12.106 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:12.365 10:38:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:12.365 10:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8d37eaf6-b4f5-4a3e-9508-ea0c6acf6ef8 00:10:12.365 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 39f19b47-dbed-41c1-ba34-f3be7442acb1 00:10:12.623 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:12.882 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:12.882 00:10:12.882 real 0m16.294s 00:10:12.882 user 0m15.978s 00:10:12.882 sys 0m1.512s 00:10:12.882 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.882 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:12.882 ************************************ 00:10:12.882 END TEST lvs_grow_clean 00:10:12.882 ************************************ 00:10:12.882 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:12.882 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:12.882 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.882 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:12.882 ************************************ 
00:10:12.882 START TEST lvs_grow_dirty 00:10:12.882 ************************************ 00:10:12.882 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:12.882 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:12.882 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:12.882 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:12.882 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:12.882 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:12.882 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:12.882 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:12.882 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:12.882 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:13.141 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:13.141 10:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:13.400 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0d86b107-da10-4703-a723-f880019d5c82 00:10:13.400 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d86b107-da10-4703-a723-f880019d5c82 00:10:13.400 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:13.658 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:13.658 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:13.658 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0d86b107-da10-4703-a723-f880019d5c82 lvol 150 00:10:13.658 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=87bb6653-5a2c-4470-adc8-19e4ee493cda 00:10:13.658 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:13.658 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:13.917 [2024-11-19 10:38:03.544058] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:10:13.917 [2024-11-19 10:38:03.544107] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:13.917 true 00:10:13.917 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d86b107-da10-4703-a723-f880019d5c82 00:10:13.917 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:14.175 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:14.175 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:14.175 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 87bb6653-5a2c-4470-adc8-19e4ee493cda 00:10:14.434 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:14.693 [2024-11-19 10:38:04.246207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.693 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:14.693 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3785597 00:10:14.693 10:38:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:14.693 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:14.693 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3785597 /var/tmp/bdevperf.sock 00:10:14.693 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3785597 ']' 00:10:14.693 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:14.693 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.693 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:14.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:14.693 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.693 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:14.951 [2024-11-19 10:38:04.493643] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:10:14.951 [2024-11-19 10:38:04.493689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3785597 ] 00:10:14.951 [2024-11-19 10:38:04.550102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.951 [2024-11-19 10:38:04.589860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.951 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.951 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:14.951 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:15.518 Nvme0n1 00:10:15.518 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:15.518 [ 00:10:15.518 { 00:10:15.518 "name": "Nvme0n1", 00:10:15.518 "aliases": [ 00:10:15.518 "87bb6653-5a2c-4470-adc8-19e4ee493cda" 00:10:15.518 ], 00:10:15.518 "product_name": "NVMe disk", 00:10:15.518 "block_size": 4096, 00:10:15.518 "num_blocks": 38912, 00:10:15.518 "uuid": "87bb6653-5a2c-4470-adc8-19e4ee493cda", 00:10:15.518 "numa_id": 1, 00:10:15.518 "assigned_rate_limits": { 00:10:15.518 "rw_ios_per_sec": 0, 00:10:15.518 "rw_mbytes_per_sec": 0, 00:10:15.518 "r_mbytes_per_sec": 0, 00:10:15.518 "w_mbytes_per_sec": 0 00:10:15.518 }, 00:10:15.518 "claimed": false, 00:10:15.518 "zoned": false, 00:10:15.518 "supported_io_types": { 00:10:15.518 "read": true, 
00:10:15.518 "write": true, 00:10:15.518 "unmap": true, 00:10:15.518 "flush": true, 00:10:15.518 "reset": true, 00:10:15.518 "nvme_admin": true, 00:10:15.518 "nvme_io": true, 00:10:15.518 "nvme_io_md": false, 00:10:15.518 "write_zeroes": true, 00:10:15.518 "zcopy": false, 00:10:15.518 "get_zone_info": false, 00:10:15.518 "zone_management": false, 00:10:15.518 "zone_append": false, 00:10:15.518 "compare": true, 00:10:15.518 "compare_and_write": true, 00:10:15.518 "abort": true, 00:10:15.518 "seek_hole": false, 00:10:15.518 "seek_data": false, 00:10:15.518 "copy": true, 00:10:15.518 "nvme_iov_md": false 00:10:15.518 }, 00:10:15.518 "memory_domains": [ 00:10:15.518 { 00:10:15.518 "dma_device_id": "system", 00:10:15.518 "dma_device_type": 1 00:10:15.518 } 00:10:15.518 ], 00:10:15.518 "driver_specific": { 00:10:15.518 "nvme": [ 00:10:15.518 { 00:10:15.518 "trid": { 00:10:15.518 "trtype": "TCP", 00:10:15.518 "adrfam": "IPv4", 00:10:15.518 "traddr": "10.0.0.2", 00:10:15.518 "trsvcid": "4420", 00:10:15.518 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:15.518 }, 00:10:15.518 "ctrlr_data": { 00:10:15.518 "cntlid": 1, 00:10:15.518 "vendor_id": "0x8086", 00:10:15.518 "model_number": "SPDK bdev Controller", 00:10:15.518 "serial_number": "SPDK0", 00:10:15.518 "firmware_revision": "25.01", 00:10:15.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:15.518 "oacs": { 00:10:15.518 "security": 0, 00:10:15.518 "format": 0, 00:10:15.518 "firmware": 0, 00:10:15.518 "ns_manage": 0 00:10:15.518 }, 00:10:15.518 "multi_ctrlr": true, 00:10:15.518 "ana_reporting": false 00:10:15.518 }, 00:10:15.518 "vs": { 00:10:15.518 "nvme_version": "1.3" 00:10:15.518 }, 00:10:15.518 "ns_data": { 00:10:15.518 "id": 1, 00:10:15.518 "can_share": true 00:10:15.518 } 00:10:15.518 } 00:10:15.518 ], 00:10:15.518 "mp_policy": "active_passive" 00:10:15.518 } 00:10:15.518 } 00:10:15.518 ] 00:10:15.519 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3785816 00:10:15.519 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:15.519 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:15.777 Running I/O for 10 seconds... 00:10:16.712 Latency(us) 00:10:16.712 [2024-11-19T09:38:06.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.712 Nvme0n1 : 1.00 23308.00 91.05 0.00 0.00 0.00 0.00 0.00 00:10:16.712 [2024-11-19T09:38:06.504Z] =================================================================================================================== 00:10:16.712 [2024-11-19T09:38:06.504Z] Total : 23308.00 91.05 0.00 0.00 0.00 0.00 0.00 00:10:16.712 00:10:17.648 10:38:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0d86b107-da10-4703-a723-f880019d5c82 00:10:17.648 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.648 Nvme0n1 : 2.00 23465.00 91.66 0.00 0.00 0.00 0.00 0.00 00:10:17.648 [2024-11-19T09:38:07.440Z] =================================================================================================================== 00:10:17.648 [2024-11-19T09:38:07.440Z] Total : 23465.00 91.66 0.00 0.00 0.00 0.00 0.00 00:10:17.648 00:10:17.906 true 00:10:17.906 10:38:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d86b107-da10-4703-a723-f880019d5c82 00:10:17.906 10:38:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:10:18.164 10:38:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:18.164 10:38:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:18.164 10:38:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3785816 00:10:18.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.732 Nvme0n1 : 3.00 23521.67 91.88 0.00 0.00 0.00 0.00 0.00 00:10:18.732 [2024-11-19T09:38:08.524Z] =================================================================================================================== 00:10:18.732 [2024-11-19T09:38:08.524Z] Total : 23521.67 91.88 0.00 0.00 0.00 0.00 0.00 00:10:18.732 00:10:19.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.666 Nvme0n1 : 4.00 23595.75 92.17 0.00 0.00 0.00 0.00 0.00 00:10:19.666 [2024-11-19T09:38:09.458Z] =================================================================================================================== 00:10:19.666 [2024-11-19T09:38:09.458Z] Total : 23595.75 92.17 0.00 0.00 0.00 0.00 0.00 00:10:19.666 00:10:21.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.043 Nvme0n1 : 5.00 23609.60 92.22 0.00 0.00 0.00 0.00 0.00 00:10:21.043 [2024-11-19T09:38:10.835Z] =================================================================================================================== 00:10:21.043 [2024-11-19T09:38:10.835Z] Total : 23609.60 92.22 0.00 0.00 0.00 0.00 0.00 00:10:21.043 00:10:21.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.997 Nvme0n1 : 6.00 23623.33 92.28 0.00 0.00 0.00 0.00 0.00 00:10:21.997 [2024-11-19T09:38:11.789Z] =================================================================================================================== 00:10:21.997 
[2024-11-19T09:38:11.789Z] Total : 23623.33 92.28 0.00 0.00 0.00 0.00 0.00 00:10:21.997 00:10:22.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.934 Nvme0n1 : 7.00 23647.86 92.37 0.00 0.00 0.00 0.00 0.00 00:10:22.934 [2024-11-19T09:38:12.726Z] =================================================================================================================== 00:10:22.934 [2024-11-19T09:38:12.726Z] Total : 23647.86 92.37 0.00 0.00 0.00 0.00 0.00 00:10:22.934 00:10:23.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.871 Nvme0n1 : 8.00 23679.75 92.50 0.00 0.00 0.00 0.00 0.00 00:10:23.871 [2024-11-19T09:38:13.663Z] =================================================================================================================== 00:10:23.871 [2024-11-19T09:38:13.663Z] Total : 23679.75 92.50 0.00 0.00 0.00 0.00 0.00 00:10:23.871 00:10:24.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:24.809 Nvme0n1 : 9.00 23695.11 92.56 0.00 0.00 0.00 0.00 0.00 00:10:24.809 [2024-11-19T09:38:14.601Z] =================================================================================================================== 00:10:24.809 [2024-11-19T09:38:14.601Z] Total : 23695.11 92.56 0.00 0.00 0.00 0.00 0.00 00:10:24.809 00:10:25.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:25.746 Nvme0n1 : 10.00 23713.40 92.63 0.00 0.00 0.00 0.00 0.00 00:10:25.746 [2024-11-19T09:38:15.539Z] =================================================================================================================== 00:10:25.747 [2024-11-19T09:38:15.539Z] Total : 23713.40 92.63 0.00 0.00 0.00 0.00 0.00 00:10:25.747 00:10:25.747 00:10:25.747 Latency(us) 00:10:25.747 [2024-11-19T09:38:15.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:25.747 Nvme0n1 : 10.01 23718.75 92.65 0.00 0.00 5393.21 2964.72 10673.01 00:10:25.747 [2024-11-19T09:38:15.539Z] =================================================================================================================== 00:10:25.747 [2024-11-19T09:38:15.539Z] Total : 23718.75 92.65 0.00 0.00 5393.21 2964.72 10673.01 00:10:25.747 { 00:10:25.747 "results": [ 00:10:25.747 { 00:10:25.747 "job": "Nvme0n1", 00:10:25.747 "core_mask": "0x2", 00:10:25.747 "workload": "randwrite", 00:10:25.747 "status": "finished", 00:10:25.747 "queue_depth": 128, 00:10:25.747 "io_size": 4096, 00:10:25.747 "runtime": 10.005795, 00:10:25.747 "iops": 23718.754981488226, 00:10:25.747 "mibps": 92.65138664643838, 00:10:25.747 "io_failed": 0, 00:10:25.747 "io_timeout": 0, 00:10:25.747 "avg_latency_us": 5393.208609884979, 00:10:25.747 "min_latency_us": 2964.7238095238095, 00:10:25.747 "max_latency_us": 10673.005714285715 00:10:25.747 } 00:10:25.747 ], 00:10:25.747 "core_count": 1 00:10:25.747 } 00:10:25.747 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3785597 00:10:25.747 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3785597 ']' 00:10:25.747 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3785597 00:10:25.747 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:25.747 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.747 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3785597 00:10:25.747 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:25.747 10:38:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:25.747 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3785597' 00:10:25.747 killing process with pid 3785597 00:10:25.747 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3785597 00:10:25.747 Received shutdown signal, test time was about 10.000000 seconds 00:10:25.747 00:10:25.747 Latency(us) 00:10:25.747 [2024-11-19T09:38:15.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.747 [2024-11-19T09:38:15.539Z] =================================================================================================================== 00:10:25.747 [2024-11-19T09:38:15.539Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:25.747 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3785597 00:10:26.006 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:26.265 10:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:26.265 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d86b107-da10-4703-a723-f880019d5c82 00:10:26.265 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:26.524 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:10:26.524 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:26.524 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3782379 00:10:26.524 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3782379 00:10:26.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3782379 Killed "${NVMF_APP[@]}" "$@" 00:10:26.524 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:26.524 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:26.524 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:26.524 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:26.524 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:26.524 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3787606 00:10:26.524 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3787606 00:10:26.524 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:26.524 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3787606 ']' 00:10:26.524 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.524 10:38:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.524 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.524 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.524 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:26.783 [2024-11-19 10:38:16.340495] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:10:26.783 [2024-11-19 10:38:16.340541] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.783 [2024-11-19 10:38:16.417624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.783 [2024-11-19 10:38:16.457878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.783 [2024-11-19 10:38:16.457912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.783 [2024-11-19 10:38:16.457918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.783 [2024-11-19 10:38:16.457924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.783 [2024-11-19 10:38:16.457929] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:26.783 [2024-11-19 10:38:16.458505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.783 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.783 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:26.783 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:26.783 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:26.783 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:27.042 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.042 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:27.042 [2024-11-19 10:38:16.755399] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:27.042 [2024-11-19 10:38:16.755494] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:27.042 [2024-11-19 10:38:16.755518] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:27.042 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:27.042 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 87bb6653-5a2c-4470-adc8-19e4ee493cda 00:10:27.042 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=87bb6653-5a2c-4470-adc8-19e4ee493cda 
00:10:27.042 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.042 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:27.042 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.042 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.042 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:27.300 10:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 87bb6653-5a2c-4470-adc8-19e4ee493cda -t 2000 00:10:27.560 [ 00:10:27.560 { 00:10:27.560 "name": "87bb6653-5a2c-4470-adc8-19e4ee493cda", 00:10:27.560 "aliases": [ 00:10:27.560 "lvs/lvol" 00:10:27.560 ], 00:10:27.560 "product_name": "Logical Volume", 00:10:27.560 "block_size": 4096, 00:10:27.560 "num_blocks": 38912, 00:10:27.560 "uuid": "87bb6653-5a2c-4470-adc8-19e4ee493cda", 00:10:27.560 "assigned_rate_limits": { 00:10:27.560 "rw_ios_per_sec": 0, 00:10:27.560 "rw_mbytes_per_sec": 0, 00:10:27.560 "r_mbytes_per_sec": 0, 00:10:27.560 "w_mbytes_per_sec": 0 00:10:27.560 }, 00:10:27.560 "claimed": false, 00:10:27.560 "zoned": false, 00:10:27.560 "supported_io_types": { 00:10:27.560 "read": true, 00:10:27.560 "write": true, 00:10:27.560 "unmap": true, 00:10:27.560 "flush": false, 00:10:27.560 "reset": true, 00:10:27.560 "nvme_admin": false, 00:10:27.560 "nvme_io": false, 00:10:27.560 "nvme_io_md": false, 00:10:27.560 "write_zeroes": true, 00:10:27.560 "zcopy": false, 00:10:27.560 "get_zone_info": false, 00:10:27.560 "zone_management": false, 00:10:27.560 "zone_append": 
false, 00:10:27.560 "compare": false, 00:10:27.560 "compare_and_write": false, 00:10:27.560 "abort": false, 00:10:27.560 "seek_hole": true, 00:10:27.560 "seek_data": true, 00:10:27.560 "copy": false, 00:10:27.560 "nvme_iov_md": false 00:10:27.560 }, 00:10:27.560 "driver_specific": { 00:10:27.560 "lvol": { 00:10:27.560 "lvol_store_uuid": "0d86b107-da10-4703-a723-f880019d5c82", 00:10:27.560 "base_bdev": "aio_bdev", 00:10:27.560 "thin_provision": false, 00:10:27.560 "num_allocated_clusters": 38, 00:10:27.560 "snapshot": false, 00:10:27.560 "clone": false, 00:10:27.560 "esnap_clone": false 00:10:27.560 } 00:10:27.560 } 00:10:27.560 } 00:10:27.560 ] 00:10:27.560 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:27.560 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d86b107-da10-4703-a723-f880019d5c82 00:10:27.560 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:27.560 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:27.560 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:27.560 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d86b107-da10-4703-a723-f880019d5c82 00:10:27.819 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:27.819 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:10:28.078 [2024-11-19 10:38:17.700457] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:28.078 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d86b107-da10-4703-a723-f880019d5c82 00:10:28.078 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:28.078 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d86b107-da10-4703-a723-f880019d5c82 00:10:28.078 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:28.078 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:28.078 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:28.078 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:28.078 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:28.078 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:28.078 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:28.078 10:38:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:28.079 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d86b107-da10-4703-a723-f880019d5c82 00:10:28.338 request: 00:10:28.338 { 00:10:28.338 "uuid": "0d86b107-da10-4703-a723-f880019d5c82", 00:10:28.338 "method": "bdev_lvol_get_lvstores", 00:10:28.338 "req_id": 1 00:10:28.338 } 00:10:28.338 Got JSON-RPC error response 00:10:28.338 response: 00:10:28.338 { 00:10:28.338 "code": -19, 00:10:28.338 "message": "No such device" 00:10:28.338 } 00:10:28.338 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:28.338 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:28.338 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:28.338 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:28.338 10:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:28.338 aio_bdev 00:10:28.338 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 87bb6653-5a2c-4470-adc8-19e4ee493cda 00:10:28.338 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=87bb6653-5a2c-4470-adc8-19e4ee493cda 00:10:28.338 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.338 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:28.338 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.338 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.338 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:28.597 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 87bb6653-5a2c-4470-adc8-19e4ee493cda -t 2000 00:10:28.855 [ 00:10:28.855 { 00:10:28.855 "name": "87bb6653-5a2c-4470-adc8-19e4ee493cda", 00:10:28.855 "aliases": [ 00:10:28.855 "lvs/lvol" 00:10:28.855 ], 00:10:28.855 "product_name": "Logical Volume", 00:10:28.855 "block_size": 4096, 00:10:28.855 "num_blocks": 38912, 00:10:28.855 "uuid": "87bb6653-5a2c-4470-adc8-19e4ee493cda", 00:10:28.855 "assigned_rate_limits": { 00:10:28.855 "rw_ios_per_sec": 0, 00:10:28.855 "rw_mbytes_per_sec": 0, 00:10:28.855 "r_mbytes_per_sec": 0, 00:10:28.855 "w_mbytes_per_sec": 0 00:10:28.855 }, 00:10:28.855 "claimed": false, 00:10:28.855 "zoned": false, 00:10:28.855 "supported_io_types": { 00:10:28.855 "read": true, 00:10:28.855 "write": true, 00:10:28.855 "unmap": true, 00:10:28.855 "flush": false, 00:10:28.855 "reset": true, 00:10:28.855 "nvme_admin": false, 00:10:28.855 "nvme_io": false, 00:10:28.855 "nvme_io_md": false, 00:10:28.855 "write_zeroes": true, 00:10:28.855 "zcopy": false, 00:10:28.855 "get_zone_info": false, 00:10:28.855 "zone_management": false, 00:10:28.855 "zone_append": false, 00:10:28.855 "compare": false, 00:10:28.855 "compare_and_write": false, 
00:10:28.855 "abort": false, 00:10:28.855 "seek_hole": true, 00:10:28.855 "seek_data": true, 00:10:28.855 "copy": false, 00:10:28.855 "nvme_iov_md": false 00:10:28.855 }, 00:10:28.855 "driver_specific": { 00:10:28.855 "lvol": { 00:10:28.855 "lvol_store_uuid": "0d86b107-da10-4703-a723-f880019d5c82", 00:10:28.855 "base_bdev": "aio_bdev", 00:10:28.855 "thin_provision": false, 00:10:28.855 "num_allocated_clusters": 38, 00:10:28.855 "snapshot": false, 00:10:28.855 "clone": false, 00:10:28.855 "esnap_clone": false 00:10:28.855 } 00:10:28.855 } 00:10:28.855 } 00:10:28.855 ] 00:10:28.855 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:28.855 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d86b107-da10-4703-a723-f880019d5c82 00:10:28.855 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:28.855 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:28.855 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d86b107-da10-4703-a723-f880019d5c82 00:10:28.855 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:29.114 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:29.114 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 87bb6653-5a2c-4470-adc8-19e4ee493cda 00:10:29.372 10:38:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0d86b107-da10-4703-a723-f880019d5c82 00:10:29.631 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:29.631 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:29.631 00:10:29.631 real 0m16.817s 00:10:29.631 user 0m44.224s 00:10:29.631 sys 0m3.846s 00:10:29.631 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.631 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:29.631 ************************************ 00:10:29.631 END TEST lvs_grow_dirty 00:10:29.631 ************************************ 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:29.889 nvmf_trace.0 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:29.889 rmmod nvme_tcp 00:10:29.889 rmmod nvme_fabrics 00:10:29.889 rmmod nvme_keyring 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3787606 ']' 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3787606 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3787606 ']' 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3787606 
00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3787606 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:29.889 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:29.890 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3787606' 00:10:29.890 killing process with pid 3787606 00:10:29.890 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3787606 00:10:29.890 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3787606 00:10:30.149 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:30.149 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:30.149 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:30.149 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:30.149 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:30.149 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:30.149 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:30.149 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:30.149 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:10:30.149 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.149 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.149 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.054 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:32.313 00:10:32.313 real 0m42.415s 00:10:32.313 user 1m5.781s 00:10:32.313 sys 0m10.332s 00:10:32.313 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.313 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:32.313 ************************************ 00:10:32.313 END TEST nvmf_lvs_grow 00:10:32.313 ************************************ 00:10:32.313 10:38:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:32.313 10:38:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:32.313 10:38:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.313 10:38:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:32.313 ************************************ 00:10:32.313 START TEST nvmf_bdev_io_wait 00:10:32.313 ************************************ 00:10:32.313 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:32.313 * Looking for test storage... 
00:10:32.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.313 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:32.313 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:32.314 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.314 --rc genhtml_branch_coverage=1 00:10:32.314 --rc genhtml_function_coverage=1 00:10:32.314 --rc genhtml_legend=1 00:10:32.314 --rc geninfo_all_blocks=1 00:10:32.314 --rc geninfo_unexecuted_blocks=1 00:10:32.314 00:10:32.314 ' 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:32.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.314 --rc genhtml_branch_coverage=1 00:10:32.314 --rc genhtml_function_coverage=1 00:10:32.314 --rc genhtml_legend=1 00:10:32.314 --rc geninfo_all_blocks=1 00:10:32.314 --rc geninfo_unexecuted_blocks=1 00:10:32.314 00:10:32.314 ' 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:32.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.314 --rc genhtml_branch_coverage=1 00:10:32.314 --rc genhtml_function_coverage=1 00:10:32.314 --rc genhtml_legend=1 00:10:32.314 --rc geninfo_all_blocks=1 00:10:32.314 --rc geninfo_unexecuted_blocks=1 00:10:32.314 00:10:32.314 ' 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:32.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.314 --rc genhtml_branch_coverage=1 00:10:32.314 --rc genhtml_function_coverage=1 00:10:32.314 --rc genhtml_legend=1 00:10:32.314 --rc geninfo_all_blocks=1 00:10:32.314 --rc geninfo_unexecuted_blocks=1 00:10:32.314 00:10:32.314 ' 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.314 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.574 10:38:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.574 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:32.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:32.575 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:39.144 10:38:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:39.144 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:39.144 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.144 10:38:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:39.144 Found net devices under 0000:86:00.0: cvl_0_0 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.144 
10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:39.144 Found net devices under 0000:86:00.1: cvl_0_1 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.144 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.144 10:38:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:39.145 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.145 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.145 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:39.145 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:39.145 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.145 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.145 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:39.145 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:39.145 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.145 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.145 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.145 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.145 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:39.145 10:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:39.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:10:39.145 00:10:39.145 --- 10.0.0.2 ping statistics --- 00:10:39.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.145 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:39.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:10:39.145 00:10:39.145 --- 10.0.0.1 ping statistics --- 00:10:39.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.145 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3791737 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3791737 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3791737 ']' 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.145 10:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.145 [2024-11-19 10:38:28.208436] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:10:39.145 [2024-11-19 10:38:28.208490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.145 [2024-11-19 10:38:28.289119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.145 [2024-11-19 10:38:28.332549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.145 [2024-11-19 10:38:28.332584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:39.145 [2024-11-19 10:38:28.332590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.145 [2024-11-19 10:38:28.332596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.145 [2024-11-19 10:38:28.332602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.145 [2024-11-19 10:38:28.334097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.145 [2024-11-19 10:38:28.334224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.145 [2024-11-19 10:38:28.334296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.145 [2024-11-19 10:38:28.334297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.404 10:38:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.404 [2024-11-19 10:38:29.167797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.404 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.665 Malloc0 00:10:39.665 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.665 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:39.665 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.665 
10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.665 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.665 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:39.665 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.665 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.665 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.665 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.665 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.665 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.665 [2024-11-19 10:38:29.222964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3791987 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:39.666 
10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:39.666 { 00:10:39.666 "params": { 00:10:39.666 "name": "Nvme$subsystem", 00:10:39.666 "trtype": "$TEST_TRANSPORT", 00:10:39.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.666 "adrfam": "ipv4", 00:10:39.666 "trsvcid": "$NVMF_PORT", 00:10:39.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.666 "hdgst": ${hdgst:-false}, 00:10:39.666 "ddgst": ${ddgst:-false} 00:10:39.666 }, 00:10:39.666 "method": "bdev_nvme_attach_controller" 00:10:39.666 } 00:10:39.666 EOF 00:10:39.666 )") 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3791989 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3791992 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:39.666 { 00:10:39.666 "params": { 00:10:39.666 
"name": "Nvme$subsystem", 00:10:39.666 "trtype": "$TEST_TRANSPORT", 00:10:39.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.666 "adrfam": "ipv4", 00:10:39.666 "trsvcid": "$NVMF_PORT", 00:10:39.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.666 "hdgst": ${hdgst:-false}, 00:10:39.666 "ddgst": ${ddgst:-false} 00:10:39.666 }, 00:10:39.666 "method": "bdev_nvme_attach_controller" 00:10:39.666 } 00:10:39.666 EOF 00:10:39.666 )") 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3791994 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:39.666 { 00:10:39.666 "params": { 00:10:39.666 "name": "Nvme$subsystem", 00:10:39.666 "trtype": "$TEST_TRANSPORT", 00:10:39.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.666 "adrfam": "ipv4", 00:10:39.666 "trsvcid": "$NVMF_PORT", 00:10:39.666 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.666 "hdgst": ${hdgst:-false}, 00:10:39.666 "ddgst": ${ddgst:-false} 00:10:39.666 }, 00:10:39.666 "method": "bdev_nvme_attach_controller" 00:10:39.666 } 00:10:39.666 EOF 00:10:39.666 )") 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:39.666 { 00:10:39.666 "params": { 00:10:39.666 "name": "Nvme$subsystem", 00:10:39.666 "trtype": "$TEST_TRANSPORT", 00:10:39.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.666 "adrfam": "ipv4", 00:10:39.666 "trsvcid": "$NVMF_PORT", 00:10:39.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.666 "hdgst": ${hdgst:-false}, 00:10:39.666 "ddgst": ${ddgst:-false} 00:10:39.666 }, 00:10:39.666 "method": "bdev_nvme_attach_controller" 00:10:39.666 } 00:10:39.666 EOF 00:10:39.666 )") 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3791987 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:39.666 "params": { 00:10:39.666 "name": "Nvme1", 00:10:39.666 "trtype": "tcp", 00:10:39.666 "traddr": "10.0.0.2", 00:10:39.666 "adrfam": "ipv4", 00:10:39.666 "trsvcid": "4420", 00:10:39.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.666 "hdgst": false, 00:10:39.666 "ddgst": false 00:10:39.666 }, 00:10:39.666 "method": "bdev_nvme_attach_controller" 00:10:39.666 }' 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:39.666 "params": { 00:10:39.666 "name": "Nvme1", 00:10:39.666 "trtype": "tcp", 00:10:39.666 "traddr": "10.0.0.2", 00:10:39.666 "adrfam": "ipv4", 00:10:39.666 "trsvcid": "4420", 00:10:39.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.666 "hdgst": false, 00:10:39.666 "ddgst": false 00:10:39.666 }, 00:10:39.666 "method": "bdev_nvme_attach_controller" 00:10:39.666 }' 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:39.666 "params": { 00:10:39.666 "name": "Nvme1", 00:10:39.666 "trtype": "tcp", 00:10:39.666 "traddr": "10.0.0.2", 00:10:39.666 "adrfam": "ipv4", 00:10:39.666 "trsvcid": "4420", 00:10:39.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.666 "hdgst": false, 00:10:39.666 "ddgst": false 00:10:39.666 }, 00:10:39.666 "method": "bdev_nvme_attach_controller" 00:10:39.666 }' 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:39.666 10:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:39.666 "params": { 00:10:39.666 "name": "Nvme1", 00:10:39.666 "trtype": "tcp", 00:10:39.666 "traddr": "10.0.0.2", 00:10:39.666 "adrfam": "ipv4", 00:10:39.666 "trsvcid": "4420", 00:10:39.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.666 "hdgst": false, 00:10:39.666 "ddgst": false 00:10:39.666 }, 00:10:39.666 "method": "bdev_nvme_attach_controller" 00:10:39.666 }' 00:10:39.666 [2024-11-19 10:38:29.276617] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:10:39.666 [2024-11-19 10:38:29.276669] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:39.666 [2024-11-19 10:38:29.277105] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:10:39.666 [2024-11-19 10:38:29.277109] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:10:39.666 [2024-11-19 10:38:29.277148] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:39.667 [2024-11-19 10:38:29.277149] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:39.667 [2024-11-19 10:38:29.277372] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:10:39.667 [2024-11-19 10:38:29.277414] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:39.955 [2024-11-19 10:38:29.471798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.955 [2024-11-19 10:38:29.514707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:39.955 [2024-11-19 10:38:29.572094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.955 [2024-11-19 10:38:29.631625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:39.955 [2024-11-19 10:38:29.633104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.955 [2024-11-19 10:38:29.675446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:39.955 [2024-11-19 10:38:29.693530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.955 [2024-11-19 10:38:29.733134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:40.262 Running I/O for 1 seconds... 00:10:40.262 Running I/O for 1 seconds... 00:10:40.262 Running I/O for 1 seconds... 00:10:40.262 Running I/O for 1 seconds... 
00:10:41.243 250136.00 IOPS, 977.09 MiB/s 00:10:41.243 Latency(us) 00:10:41.243 [2024-11-19T09:38:31.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.244 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:41.244 Nvme1n1 : 1.00 249754.13 975.60 0.00 0.00 510.10 223.33 1497.97 00:10:41.244 [2024-11-19T09:38:31.036Z] =================================================================================================================== 00:10:41.244 [2024-11-19T09:38:31.036Z] Total : 249754.13 975.60 0.00 0.00 510.10 223.33 1497.97 00:10:41.244 11759.00 IOPS, 45.93 MiB/s 00:10:41.244 Latency(us) 00:10:41.244 [2024-11-19T09:38:31.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.244 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:41.244 Nvme1n1 : 1.01 11822.11 46.18 0.00 0.00 10792.27 5149.26 16352.79 00:10:41.244 [2024-11-19T09:38:31.036Z] =================================================================================================================== 00:10:41.244 [2024-11-19T09:38:31.036Z] Total : 11822.11 46.18 0.00 0.00 10792.27 5149.26 16352.79 00:10:41.244 9737.00 IOPS, 38.04 MiB/s 00:10:41.244 Latency(us) 00:10:41.244 [2024-11-19T09:38:31.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.244 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:41.244 Nvme1n1 : 1.01 9796.61 38.27 0.00 0.00 13016.34 6116.69 21470.84 00:10:41.244 [2024-11-19T09:38:31.036Z] =================================================================================================================== 00:10:41.244 [2024-11-19T09:38:31.036Z] Total : 9796.61 38.27 0.00 0.00 13016.34 6116.69 21470.84 00:10:41.244 10882.00 IOPS, 42.51 MiB/s 00:10:41.244 Latency(us) 00:10:41.244 [2024-11-19T09:38:31.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.244 Job: Nvme1n1 (Core Mask 
0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:41.244 Nvme1n1 : 1.00 10966.37 42.84 0.00 0.00 11642.81 3276.80 20597.03 00:10:41.244 [2024-11-19T09:38:31.036Z] =================================================================================================================== 00:10:41.244 [2024-11-19T09:38:31.036Z] Total : 10966.37 42.84 0.00 0.00 11642.81 3276.80 20597.03 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3791989 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3791992 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3791994 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.503 rmmod nvme_tcp 00:10:41.503 rmmod nvme_fabrics 00:10:41.503 rmmod nvme_keyring 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3791737 ']' 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3791737 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3791737 ']' 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3791737 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3791737 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3791737' 00:10:41.503 killing process with pid 3791737 00:10:41.503 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3791737 00:10:41.503 10:38:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3791737 00:10:41.762 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.762 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.762 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.762 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:41.762 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:41.762 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.762 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.762 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.762 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.762 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.762 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.762 10:38:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:44.296 00:10:44.296 real 0m11.554s 00:10:44.296 user 0m19.346s 00:10:44.296 sys 0m6.285s 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:44.296 ************************************ 
00:10:44.296 END TEST nvmf_bdev_io_wait 00:10:44.296 ************************************ 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.296 ************************************ 00:10:44.296 START TEST nvmf_queue_depth 00:10:44.296 ************************************ 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:44.296 * Looking for test storage... 00:10:44.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:44.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.296 --rc genhtml_branch_coverage=1 00:10:44.296 --rc genhtml_function_coverage=1 00:10:44.296 --rc genhtml_legend=1 00:10:44.296 --rc geninfo_all_blocks=1 00:10:44.296 --rc 
geninfo_unexecuted_blocks=1 00:10:44.296 00:10:44.296 ' 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:44.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.296 --rc genhtml_branch_coverage=1 00:10:44.296 --rc genhtml_function_coverage=1 00:10:44.296 --rc genhtml_legend=1 00:10:44.296 --rc geninfo_all_blocks=1 00:10:44.296 --rc geninfo_unexecuted_blocks=1 00:10:44.296 00:10:44.296 ' 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:44.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.296 --rc genhtml_branch_coverage=1 00:10:44.296 --rc genhtml_function_coverage=1 00:10:44.296 --rc genhtml_legend=1 00:10:44.296 --rc geninfo_all_blocks=1 00:10:44.296 --rc geninfo_unexecuted_blocks=1 00:10:44.296 00:10:44.296 ' 00:10:44.296 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:44.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.296 --rc genhtml_branch_coverage=1 00:10:44.296 --rc genhtml_function_coverage=1 00:10:44.296 --rc genhtml_legend=1 00:10:44.296 --rc geninfo_all_blocks=1 00:10:44.297 --rc geninfo_unexecuted_blocks=1 00:10:44.297 00:10:44.297 ' 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.297 10:38:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.297 10:38:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.297 10:38:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.297 10:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.875 10:38:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:50.875 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:50.875 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.875 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:50.876 Found net devices under 0000:86:00.0: cvl_0_0 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:50.876 Found net devices under 0000:86:00.1: cvl_0_1 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.876 
10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:10:50.876 00:10:50.876 --- 10.0.0.2 ping statistics --- 00:10:50.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.876 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:50.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:10:50.876 00:10:50.876 --- 10.0.0.1 ping statistics --- 00:10:50.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.876 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3795805 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
3795805 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3795805 ']' 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.876 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.876 [2024-11-19 10:38:39.814139] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:10:50.876 [2024-11-19 10:38:39.814183] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.876 [2024-11-19 10:38:39.894052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.876 [2024-11-19 10:38:39.934753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.877 [2024-11-19 10:38:39.934791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:50.877 [2024-11-19 10:38:39.934798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.877 [2024-11-19 10:38:39.934804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.877 [2024-11-19 10:38:39.934810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.877 [2024-11-19 10:38:39.935378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.877 [2024-11-19 10:38:40.070867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.877 Malloc0 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.877 [2024-11-19 10:38:40.120908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.877 10:38:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3795952 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3795952 /var/tmp/bdevperf.sock 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3795952 ']' 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:50.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.877 [2024-11-19 10:38:40.173052] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:10:50.877 [2024-11-19 10:38:40.173094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3795952 ] 00:10:50.877 [2024-11-19 10:38:40.248224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.877 [2024-11-19 10:38:40.292065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.877 NVMe0n1 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.877 10:38:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:50.877 Running I/O for 10 seconds... 
00:10:53.188 11910.00 IOPS, 46.52 MiB/s [2024-11-19T09:38:43.916Z] 12228.50 IOPS, 47.77 MiB/s [2024-11-19T09:38:44.853Z] 12289.00 IOPS, 48.00 MiB/s [2024-11-19T09:38:45.789Z] 12294.00 IOPS, 48.02 MiB/s [2024-11-19T09:38:46.726Z] 12366.20 IOPS, 48.31 MiB/s [2024-11-19T09:38:47.662Z] 12445.83 IOPS, 48.62 MiB/s [2024-11-19T09:38:48.599Z] 12447.86 IOPS, 48.62 MiB/s [2024-11-19T09:38:49.977Z] 12480.88 IOPS, 48.75 MiB/s [2024-11-19T09:38:50.913Z] 12506.67 IOPS, 48.85 MiB/s [2024-11-19T09:38:50.913Z] 12516.10 IOPS, 48.89 MiB/s 00:11:01.121 Latency(us) 00:11:01.121 [2024-11-19T09:38:50.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.121 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:01.121 Verification LBA range: start 0x0 length 0x4000 00:11:01.121 NVMe0n1 : 10.05 12551.27 49.03 0.00 0.00 81287.88 13856.18 52928.12 00:11:01.121 [2024-11-19T09:38:50.913Z] =================================================================================================================== 00:11:01.121 [2024-11-19T09:38:50.913Z] Total : 12551.27 49.03 0.00 0.00 81287.88 13856.18 52928.12 00:11:01.121 { 00:11:01.121 "results": [ 00:11:01.121 { 00:11:01.121 "job": "NVMe0n1", 00:11:01.121 "core_mask": "0x1", 00:11:01.121 "workload": "verify", 00:11:01.121 "status": "finished", 00:11:01.121 "verify_range": { 00:11:01.121 "start": 0, 00:11:01.121 "length": 16384 00:11:01.121 }, 00:11:01.121 "queue_depth": 1024, 00:11:01.121 "io_size": 4096, 00:11:01.121 "runtime": 10.053561, 00:11:01.121 "iops": 12551.274120682214, 00:11:01.121 "mibps": 49.0284145339149, 00:11:01.121 "io_failed": 0, 00:11:01.121 "io_timeout": 0, 00:11:01.121 "avg_latency_us": 81287.88106339709, 00:11:01.121 "min_latency_us": 13856.182857142858, 00:11:01.121 "max_latency_us": 52928.1219047619 00:11:01.121 } 00:11:01.121 ], 00:11:01.121 "core_count": 1 00:11:01.121 } 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 3795952 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3795952 ']' 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3795952 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3795952 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3795952' 00:11:01.121 killing process with pid 3795952 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3795952 00:11:01.121 Received shutdown signal, test time was about 10.000000 seconds 00:11:01.121 00:11:01.121 Latency(us) 00:11:01.121 [2024-11-19T09:38:50.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.121 [2024-11-19T09:38:50.913Z] =================================================================================================================== 00:11:01.121 [2024-11-19T09:38:50.913Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3795952 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:01.121 rmmod nvme_tcp 00:11:01.121 rmmod nvme_fabrics 00:11:01.121 rmmod nvme_keyring 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3795805 ']' 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3795805 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3795805 ']' 00:11:01.121 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3795805 00:11:01.381 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:01.381 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.381 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3795805 00:11:01.381 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:11:01.381 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:01.381 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3795805' 00:11:01.381 killing process with pid 3795805 00:11:01.381 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3795805 00:11:01.381 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3795805 00:11:01.381 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:01.381 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:01.381 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:01.381 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:01.381 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:01.381 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:01.381 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:01.381 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:01.381 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:01.381 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.381 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.381 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.916 10:38:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:03.916 00:11:03.916 real 0m19.661s 00:11:03.916 user 0m22.868s 00:11:03.916 sys 0m6.106s 00:11:03.916 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.916 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:03.916 ************************************ 00:11:03.916 END TEST nvmf_queue_depth 00:11:03.916 ************************************ 00:11:03.916 10:38:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:03.916 10:38:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:03.916 10:38:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.916 10:38:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:03.916 ************************************ 00:11:03.916 START TEST nvmf_target_multipath 00:11:03.916 ************************************ 00:11:03.916 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:03.916 * Looking for test storage... 
00:11:03.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:03.916 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:03.917 10:38:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:03.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.917 --rc genhtml_branch_coverage=1 00:11:03.917 --rc genhtml_function_coverage=1 00:11:03.917 --rc genhtml_legend=1 00:11:03.917 --rc geninfo_all_blocks=1 00:11:03.917 --rc geninfo_unexecuted_blocks=1 00:11:03.917 00:11:03.917 ' 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:03.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.917 --rc genhtml_branch_coverage=1 00:11:03.917 --rc genhtml_function_coverage=1 00:11:03.917 --rc genhtml_legend=1 00:11:03.917 --rc geninfo_all_blocks=1 00:11:03.917 --rc geninfo_unexecuted_blocks=1 00:11:03.917 00:11:03.917 ' 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:03.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.917 --rc genhtml_branch_coverage=1 00:11:03.917 --rc genhtml_function_coverage=1 00:11:03.917 --rc genhtml_legend=1 00:11:03.917 --rc geninfo_all_blocks=1 00:11:03.917 --rc geninfo_unexecuted_blocks=1 00:11:03.917 00:11:03.917 ' 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:03.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.917 --rc genhtml_branch_coverage=1 00:11:03.917 --rc genhtml_function_coverage=1 00:11:03.917 --rc genhtml_legend=1 00:11:03.917 --rc geninfo_all_blocks=1 00:11:03.917 --rc geninfo_unexecuted_blocks=1 00:11:03.917 00:11:03.917 ' 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:03.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:03.917 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:11:03.918 10:38:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:10.486 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:10.486 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:10.486 Found net devices under 0000:86:00.0: cvl_0_0 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.486 10:38:59 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.486 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:10.487 Found net devices under 0000:86:00.1: cvl_0_1 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:10.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:11:10.487 00:11:10.487 --- 10.0.0.2 ping statistics --- 00:11:10.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.487 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:10.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:11:10.487 00:11:10.487 --- 10.0.0.1 ping statistics --- 00:11:10.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.487 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:10.487 only one NIC for nvmf test 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:10.487 10:38:59 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:10.487 rmmod nvme_tcp 00:11:10.487 rmmod nvme_fabrics 00:11:10.487 rmmod nvme_keyring 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.487 10:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:12.392 00:11:12.392 real 0m8.437s 00:11:12.392 user 0m1.837s 00:11:12.392 sys 0m4.605s 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:12.392 ************************************ 00:11:12.392 END TEST nvmf_target_multipath 00:11:12.392 ************************************ 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:12.392 ************************************ 00:11:12.392 START TEST nvmf_zcopy 00:11:12.392 ************************************ 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:12.392 * Looking for test storage... 00:11:12.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.392 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.393 10:39:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:12.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.393 --rc genhtml_branch_coverage=1 00:11:12.393 --rc genhtml_function_coverage=1 00:11:12.393 --rc genhtml_legend=1 00:11:12.393 --rc geninfo_all_blocks=1 00:11:12.393 --rc geninfo_unexecuted_blocks=1 00:11:12.393 00:11:12.393 ' 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:12.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.393 --rc genhtml_branch_coverage=1 00:11:12.393 --rc genhtml_function_coverage=1 00:11:12.393 --rc genhtml_legend=1 00:11:12.393 --rc geninfo_all_blocks=1 00:11:12.393 --rc geninfo_unexecuted_blocks=1 00:11:12.393 00:11:12.393 ' 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:12.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.393 --rc genhtml_branch_coverage=1 00:11:12.393 --rc genhtml_function_coverage=1 00:11:12.393 --rc genhtml_legend=1 00:11:12.393 --rc geninfo_all_blocks=1 00:11:12.393 --rc geninfo_unexecuted_blocks=1 00:11:12.393 00:11:12.393 ' 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:12.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.393 --rc genhtml_branch_coverage=1 00:11:12.393 --rc 
genhtml_function_coverage=1 00:11:12.393 --rc genhtml_legend=1 00:11:12.393 --rc geninfo_all_blocks=1 00:11:12.393 --rc geninfo_unexecuted_blocks=1 00:11:12.393 00:11:12.393 ' 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.393 10:39:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:12.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.393 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.393 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:12.393 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:12.393 10:39:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.393 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:12.393 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:12.393 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:12.393 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.393 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.393 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.393 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:12.393 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:12.393 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:12.393 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:18.960 10:39:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:18.960 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.960 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:18.961 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:18.961 Found net devices under 0000:86:00.0: cvl_0_0 00:11:18.961 10:39:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:18.961 Found net devices under 0000:86:00.1: cvl_0_1 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.961 10:39:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:18.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:11:18.961 00:11:18.961 --- 10.0.0.2 ping statistics --- 00:11:18.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.961 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:18.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:11:18.961 00:11:18.961 --- 10.0.0.1 ping statistics --- 00:11:18.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.961 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:18.961 10:39:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:18.961 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:18.961 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:18.961 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.961 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.961 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3804920 00:11:18.961 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3804920 00:11:18.961 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:18.961 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3804920 ']' 00:11:18.961 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.961 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.961 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.961 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.961 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.961 [2024-11-19 10:39:08.068984] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:11:18.961 [2024-11-19 10:39:08.069031] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.961 [2024-11-19 10:39:08.148889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.961 [2024-11-19 10:39:08.192148] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.961 [2024-11-19 10:39:08.192183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:18.961 [2024-11-19 10:39:08.192191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.961 [2024-11-19 10:39:08.192197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.961 [2024-11-19 10:39:08.192207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:18.961 [2024-11-19 10:39:08.192773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.220 [2024-11-19 10:39:08.942853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.220 [2024-11-19 10:39:08.963030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.220 malloc0 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.220 10:39:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.220 10:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:19.220 10:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:19.220 10:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:19.220 10:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:19.220 10:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:19.220 10:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:19.220 { 00:11:19.220 "params": { 00:11:19.220 "name": "Nvme$subsystem", 00:11:19.220 "trtype": "$TEST_TRANSPORT", 00:11:19.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:19.220 "adrfam": "ipv4", 00:11:19.220 "trsvcid": "$NVMF_PORT", 00:11:19.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:19.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:19.220 "hdgst": ${hdgst:-false}, 00:11:19.220 "ddgst": ${ddgst:-false} 00:11:19.220 }, 00:11:19.220 "method": "bdev_nvme_attach_controller" 00:11:19.220 } 00:11:19.220 EOF 00:11:19.220 )") 00:11:19.220 10:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:19.478 10:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:11:19.478 10:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:19.478 10:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:19.478 "params": { 00:11:19.478 "name": "Nvme1", 00:11:19.478 "trtype": "tcp", 00:11:19.478 "traddr": "10.0.0.2", 00:11:19.478 "adrfam": "ipv4", 00:11:19.478 "trsvcid": "4420", 00:11:19.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:19.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:19.478 "hdgst": false, 00:11:19.478 "ddgst": false 00:11:19.478 }, 00:11:19.478 "method": "bdev_nvme_attach_controller" 00:11:19.478 }' 00:11:19.478 [2024-11-19 10:39:09.048578] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:11:19.478 [2024-11-19 10:39:09.048627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805321 ] 00:11:19.478 [2024-11-19 10:39:09.125968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.478 [2024-11-19 10:39:09.169126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.736 Running I/O for 10 seconds... 
00:11:21.606 8560.00 IOPS, 66.88 MiB/s [2024-11-19T09:39:12.775Z] 8644.50 IOPS, 67.54 MiB/s [2024-11-19T09:39:13.707Z] 8680.33 IOPS, 67.82 MiB/s [2024-11-19T09:39:14.641Z] 8717.00 IOPS, 68.10 MiB/s [2024-11-19T09:39:15.574Z] 8735.60 IOPS, 68.25 MiB/s [2024-11-19T09:39:16.508Z] 8748.67 IOPS, 68.35 MiB/s [2024-11-19T09:39:17.442Z] 8756.71 IOPS, 68.41 MiB/s [2024-11-19T09:39:18.377Z] 8763.00 IOPS, 68.46 MiB/s [2024-11-19T09:39:19.752Z] 8768.44 IOPS, 68.50 MiB/s [2024-11-19T09:39:19.752Z] 8776.80 IOPS, 68.57 MiB/s 00:11:29.960 Latency(us) 00:11:29.960 [2024-11-19T09:39:19.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:29.960 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:29.960 Verification LBA range: start 0x0 length 0x1000 00:11:29.960 Nvme1n1 : 10.01 8776.73 68.57 0.00 0.00 14543.17 2215.74 24092.28 00:11:29.960 [2024-11-19T09:39:19.752Z] =================================================================================================================== 00:11:29.960 [2024-11-19T09:39:19.752Z] Total : 8776.73 68.57 0.00 0.00 14543.17 2215.74 24092.28 00:11:29.960 10:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3807314 00:11:29.960 10:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:29.960 10:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.960 10:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:29.960 10:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:29.960 10:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:29.960 10:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:29.960 10:39:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:29.960 10:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:29.960 { 00:11:29.960 "params": { 00:11:29.960 "name": "Nvme$subsystem", 00:11:29.960 "trtype": "$TEST_TRANSPORT", 00:11:29.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:29.960 "adrfam": "ipv4", 00:11:29.960 "trsvcid": "$NVMF_PORT", 00:11:29.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:29.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:29.961 "hdgst": ${hdgst:-false}, 00:11:29.961 "ddgst": ${ddgst:-false} 00:11:29.961 }, 00:11:29.961 "method": "bdev_nvme_attach_controller" 00:11:29.961 } 00:11:29.961 EOF 00:11:29.961 )") 00:11:29.961 10:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:29.961 [2024-11-19 10:39:19.526463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.526498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 10:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:11:29.961 10:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:29.961 10:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:29.961 "params": { 00:11:29.961 "name": "Nvme1", 00:11:29.961 "trtype": "tcp", 00:11:29.961 "traddr": "10.0.0.2", 00:11:29.961 "adrfam": "ipv4", 00:11:29.961 "trsvcid": "4420", 00:11:29.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:29.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:29.961 "hdgst": false, 00:11:29.961 "ddgst": false 00:11:29.961 }, 00:11:29.961 "method": "bdev_nvme_attach_controller" 00:11:29.961 }' 00:11:29.961 [2024-11-19 10:39:19.538458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.538471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 [2024-11-19 10:39:19.550484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.550494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 [2024-11-19 10:39:19.562517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.562531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 [2024-11-19 10:39:19.567095] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:11:29.961 [2024-11-19 10:39:19.567135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807314 ] 00:11:29.961 [2024-11-19 10:39:19.574549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.574560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 [2024-11-19 10:39:19.586582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.586591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 [2024-11-19 10:39:19.598616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.598626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 [2024-11-19 10:39:19.610646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.610655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 [2024-11-19 10:39:19.622680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.622688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 [2024-11-19 10:39:19.634713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.634722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 [2024-11-19 10:39:19.643227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.961 [2024-11-19 10:39:19.646742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:29.961 [2024-11-19 10:39:19.646751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 [2024-11-19 10:39:19.658776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.658790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 [2024-11-19 10:39:19.670804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.670813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 [2024-11-19 10:39:19.682838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.682848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 [2024-11-19 10:39:19.685189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.961 [2024-11-19 10:39:19.694875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.694887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 [2024-11-19 10:39:19.706907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.706925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 [2024-11-19 10:39:19.718937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.718950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 [2024-11-19 10:39:19.730965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.730977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.961 [2024-11-19 10:39:19.742996] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.961 [2024-11-19 10:39:19.743007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.219 [2024-11-19 10:39:19.755028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.219 [2024-11-19 10:39:19.755039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.219 [2024-11-19 10:39:19.767060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.219 [2024-11-19 10:39:19.767071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.219 [2024-11-19 10:39:19.779116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.219 [2024-11-19 10:39:19.779138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.219 [2024-11-19 10:39:19.791141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.219 [2024-11-19 10:39:19.791156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.219 [2024-11-19 10:39:19.803168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.219 [2024-11-19 10:39:19.803181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.219 [2024-11-19 10:39:19.815194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.219 [2024-11-19 10:39:19.815209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.219 [2024-11-19 10:39:19.827226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.219 [2024-11-19 10:39:19.827235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.219 [2024-11-19 10:39:19.839264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:30.219 [2024-11-19 10:39:19.839275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.219 [2024-11-19 10:39:19.851302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.219 [2024-11-19 10:39:19.851316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.219 [2024-11-19 10:39:19.863331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.219 [2024-11-19 10:39:19.863340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.219 [2024-11-19 10:39:19.875362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.219 [2024-11-19 10:39:19.875371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.219 [2024-11-19 10:39:19.887393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.220 [2024-11-19 10:39:19.887402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.220 [2024-11-19 10:39:19.899430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.220 [2024-11-19 10:39:19.899443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.220 [2024-11-19 10:39:19.911461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.220 [2024-11-19 10:39:19.911469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.220 [2024-11-19 10:39:19.923492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.220 [2024-11-19 10:39:19.923501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.220 [2024-11-19 10:39:19.935526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.220 
[2024-11-19 10:39:19.935536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.220 [2024-11-19 10:39:19.947558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.220 [2024-11-19 10:39:19.947569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.220 [2024-11-19 10:39:19.959592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.220 [2024-11-19 10:39:19.959601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.220 [2024-11-19 10:39:19.971626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.220 [2024-11-19 10:39:19.971635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.220 [2024-11-19 10:39:19.983656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.220 [2024-11-19 10:39:19.983667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.220 [2024-11-19 10:39:19.995698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.220 [2024-11-19 10:39:19.995715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.220 Running I/O for 5 seconds... 
00:11:30.220 [2024-11-19 10:39:20.007721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.220 [2024-11-19 10:39:20.007731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.478 [2024-11-19 10:39:20.020380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.478 [2024-11-19 10:39:20.020400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.478 [2024-11-19 10:39:20.031753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.478 [2024-11-19 10:39:20.031773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.478 [2024-11-19 10:39:20.045867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.478 [2024-11-19 10:39:20.045886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.478 [2024-11-19 10:39:20.055146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.478 [2024-11-19 10:39:20.055165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.478 [2024-11-19 10:39:20.069822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.478 [2024-11-19 10:39:20.069841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.478 [2024-11-19 10:39:20.080471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.478 [2024-11-19 10:39:20.080490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.478 [2024-11-19 10:39:20.094956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.478 [2024-11-19 10:39:20.094975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.478 [2024-11-19 10:39:20.108126] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.478 [2024-11-19 10:39:20.108146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.478 [2024-11-19 10:39:20.121692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.478 [2024-11-19 10:39:20.121711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.478 [2024-11-19 10:39:20.136142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.478 [2024-11-19 10:39:20.136161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.478 [2024-11-19 10:39:20.147141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.478 [2024-11-19 10:39:20.147159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.478 [2024-11-19 10:39:20.161290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.478 [2024-11-19 10:39:20.161308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.478 [2024-11-19 10:39:20.175074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.478 [2024-11-19 10:39:20.175093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.478 [2024-11-19 10:39:20.189098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.478 [2024-11-19 10:39:20.189117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.478 [2024-11-19 10:39:20.202514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.478 [2024-11-19 10:39:20.202533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.478 [2024-11-19 10:39:20.216311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:30.478 [2024-11-19 10:39:20.216329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.478
[previous two messages repeated with new timestamps at ~14 ms intervals, 2024-11-19 10:39:20.229684 through 10:39:21.002941]
16718.00 IOPS, 130.61 MiB/s [2024-11-19T09:39:21.047Z]
[previous error pair repeated, 2024-11-19 10:39:21.017246 through 10:39:22.006241]
16862.00 IOPS, 131.73 MiB/s [2024-11-19T09:39:22.082Z]
[previous error pair repeated, 2024-11-19 10:39:22.019593 through 10:39:22.440249]
add namespace 00:11:32.807 [2024-11-19 10:39:22.450635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.807 [2024-11-19 10:39:22.450653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.808 [2024-11-19 10:39:22.460618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.808 [2024-11-19 10:39:22.460637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.808 [2024-11-19 10:39:22.475009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.808 [2024-11-19 10:39:22.475029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.808 [2024-11-19 10:39:22.488927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.808 [2024-11-19 10:39:22.488946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.808 [2024-11-19 10:39:22.502838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.808 [2024-11-19 10:39:22.502857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.808 [2024-11-19 10:39:22.516680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.808 [2024-11-19 10:39:22.516698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.808 [2024-11-19 10:39:22.530560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.808 [2024-11-19 10:39:22.530579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.808 [2024-11-19 10:39:22.544323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.808 [2024-11-19 10:39:22.544341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.808 [2024-11-19 10:39:22.557909] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.808 [2024-11-19 10:39:22.557932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.808 [2024-11-19 10:39:22.571867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.808 [2024-11-19 10:39:22.571885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.808 [2024-11-19 10:39:22.585974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.808 [2024-11-19 10:39:22.585993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.600131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.600150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.610872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.610891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.624970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.624989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.638247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.638265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.651755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.651773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.665726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.665745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.679516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.679535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.693870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.693889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.704858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.704877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.718936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.718954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.732404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.732422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.746566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.746583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.760081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.760100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.773758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 
[2024-11-19 10:39:22.773777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.787593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.787611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.801332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.801350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.815240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.815258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.828622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.828640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.842079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.842098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.066 [2024-11-19 10:39:22.855527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.066 [2024-11-19 10:39:22.855545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.337 [2024-11-19 10:39:22.869555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.337 [2024-11-19 10:39:22.869574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.337 [2024-11-19 10:39:22.882897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.337 [2024-11-19 10:39:22.882916] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.337 [2024-11-19 10:39:22.896687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.337 [2024-11-19 10:39:22.896705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.337 [2024-11-19 10:39:22.909946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.337 [2024-11-19 10:39:22.909964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.337 [2024-11-19 10:39:22.923883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.337 [2024-11-19 10:39:22.923901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.337 [2024-11-19 10:39:22.937698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.337 [2024-11-19 10:39:22.937716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.337 [2024-11-19 10:39:22.951710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.337 [2024-11-19 10:39:22.951728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.338 [2024-11-19 10:39:22.962615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.338 [2024-11-19 10:39:22.962633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.338 [2024-11-19 10:39:22.976978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.338 [2024-11-19 10:39:22.976996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.338 [2024-11-19 10:39:22.991143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.338 [2024-11-19 10:39:22.991161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:33.338 [2024-11-19 10:39:23.005248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.338 [2024-11-19 10:39:23.005267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.338 16886.67 IOPS, 131.93 MiB/s [2024-11-19T09:39:23.130Z] [2024-11-19 10:39:23.018733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.338 [2024-11-19 10:39:23.018751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.338 [2024-11-19 10:39:23.033018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.338 [2024-11-19 10:39:23.033037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.338 [2024-11-19 10:39:23.044600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.338 [2024-11-19 10:39:23.044622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.338 [2024-11-19 10:39:23.059048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.338 [2024-11-19 10:39:23.059067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.338 [2024-11-19 10:39:23.072994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.338 [2024-11-19 10:39:23.073013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.338 [2024-11-19 10:39:23.087006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.338 [2024-11-19 10:39:23.087025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.338 [2024-11-19 10:39:23.100507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.338 [2024-11-19 10:39:23.100525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:33.338 [2024-11-19 10:39:23.114714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.338 [2024-11-19 10:39:23.114732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.129072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.129091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.143146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.143164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.154105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.154123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.168609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.168627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.181813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.181832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.195827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.195846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.209466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.209485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.223513] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.223533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.237227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.237248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.251322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.251342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.265262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.265281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.279263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.279282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.292763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.292782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.306517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.306541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.320285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.320304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.334179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.334200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.348083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.348102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.362420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.362440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.375868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.375887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.607 [2024-11-19 10:39:23.390631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.607 [2024-11-19 10:39:23.390649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.406539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.406558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.420447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.420466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.434446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.434465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.448534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 
[2024-11-19 10:39:23.448553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.461994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.462014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.475845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.475863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.489927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.489947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.500322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.500340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.509826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.509845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.523880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.523899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.537674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.537692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.547146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.547164] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.561314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.561336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.575410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.575429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.589446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.589465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.603207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.603228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.617362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.617381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.631397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.631416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.866 [2024-11-19 10:39:23.645356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.866 [2024-11-19 10:39:23.645374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.659266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.659284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:34.124 [2024-11-19 10:39:23.672809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.672827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.686602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.686620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.700766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.700783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.714700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.714719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.728808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.728826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.742694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.742713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.756535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.756553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.770777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.770794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.784984] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.785002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.795788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.795805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.810185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.810212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.823717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.823739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.837252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.837270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.851296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.851315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.864913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.864931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.878538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.878557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.892589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.892608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.124 [2024-11-19 10:39:23.906700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.124 [2024-11-19 10:39:23.906718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.383 [2024-11-19 10:39:23.920467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.383 [2024-11-19 10:39:23.920484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.383 [2024-11-19 10:39:23.934645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.383 [2024-11-19 10:39:23.934663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.383 [2024-11-19 10:39:23.945799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.383 [2024-11-19 10:39:23.945818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.383 [2024-11-19 10:39:23.959752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.383 [2024-11-19 10:39:23.959770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.383 [2024-11-19 10:39:23.973491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.383 [2024-11-19 10:39:23.973509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.383 [2024-11-19 10:39:23.987344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.383 [2024-11-19 10:39:23.987362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.383 [2024-11-19 10:39:24.001230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.383 
[2024-11-19 10:39:24.001248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.384 [2024-11-19 10:39:24.015379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.384 [2024-11-19 10:39:24.015397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.384 16871.00 IOPS, 131.80 MiB/s [2024-11-19T09:39:24.176Z] [2024-11-19 10:39:24.028996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.384 [2024-11-19 10:39:24.029014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.384 [2024-11-19 10:39:24.042243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.384 [2024-11-19 10:39:24.042260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.384 [2024-11-19 10:39:24.056411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.384 [2024-11-19 10:39:24.056429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.384 [2024-11-19 10:39:24.069994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.384 [2024-11-19 10:39:24.070011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.384 [2024-11-19 10:39:24.083604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.384 [2024-11-19 10:39:24.083623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.384 [2024-11-19 10:39:24.092969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.384 [2024-11-19 10:39:24.092988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.384 [2024-11-19 10:39:24.106881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.384 
[2024-11-19 10:39:24.106898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.384 [2024-11-19 10:39:24.120375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.384 [2024-11-19 10:39:24.120393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.384 [2024-11-19 10:39:24.134313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.384 [2024-11-19 10:39:24.134331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.384 [2024-11-19 10:39:24.148362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.384 [2024-11-19 10:39:24.148381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.384 [2024-11-19 10:39:24.161977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.384 [2024-11-19 10:39:24.161996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.664 [2024-11-19 10:39:24.176192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.664 [2024-11-19 10:39:24.176217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.664 [2024-11-19 10:39:24.187528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.664 [2024-11-19 10:39:24.187546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.664 [2024-11-19 10:39:24.202029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.664 [2024-11-19 10:39:24.202047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.664 [2024-11-19 10:39:24.215762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.664 [2024-11-19 10:39:24.215781] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.664 [2024-11-19 10:39:24.229590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.664 [2024-11-19 10:39:24.229609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.664 [2024-11-19 10:39:24.242811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.664 [2024-11-19 10:39:24.242829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.664 [2024-11-19 10:39:24.256908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.664 [2024-11-19 10:39:24.256926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.664 [2024-11-19 10:39:24.271125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.664 [2024-11-19 10:39:24.271143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.664 [2024-11-19 10:39:24.284856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.664 [2024-11-19 10:39:24.284875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.664 [2024-11-19 10:39:24.298839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.664 [2024-11-19 10:39:24.298858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.664 [2024-11-19 10:39:24.312388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.664 [2024-11-19 10:39:24.312406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.664 [2024-11-19 10:39:24.326296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.664 [2024-11-19 10:39:24.326315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:34.664 [2024-11-19 10:39:24.340219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.665 [2024-11-19 10:39:24.340237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.665 [2024-11-19 10:39:24.353688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.665 [2024-11-19 10:39:24.353708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.665 [2024-11-19 10:39:24.367495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.665 [2024-11-19 10:39:24.367515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.665 [2024-11-19 10:39:24.381370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.665 [2024-11-19 10:39:24.381388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.665 [2024-11-19 10:39:24.394795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.665 [2024-11-19 10:39:24.394814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.665 [2024-11-19 10:39:24.408952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.665 [2024-11-19 10:39:24.408970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.665 [2024-11-19 10:39:24.422327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.665 [2024-11-19 10:39:24.422345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.436546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.436569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.447631] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.447649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.462320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.462338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.475962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.475981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.490016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.490034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.503762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.503780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.517718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.517735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.531625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.531643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.545043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.545062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.558692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.558710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.573022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.573041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.587397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.587416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.598453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.598481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.612563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.612582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.626368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.626387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.640272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.640291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.654123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.654142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.668612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 
[2024-11-19 10:39:24.668632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.683237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.683256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.697557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.697576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.708159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.708177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.722495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.722514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.958 [2024-11-19 10:39:24.736361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.958 [2024-11-19 10:39:24.736380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.750074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.750094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.764231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.764250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.777983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.778002] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.792170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.792189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.803172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.803190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.817077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.817095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.830416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.830435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.844755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.844778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.856036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.856055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.869996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.870015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.884130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.884150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:35.225 [2024-11-19 10:39:24.895154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.895173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.909118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.909137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.922603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.922621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.936224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.936243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.949658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.949677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.963189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.963215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.972595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.972613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:24.986870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.225 [2024-11-19 10:39:24.986888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.225 [2024-11-19 10:39:25.001097] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:35.225 [2024-11-19 10:39:25.001116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:35.501 [2024-11-19 10:39:25.015209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:35.501 [2024-11-19 10:39:25.015227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:35.501 16902.60 IOPS, 132.05 MiB/s [2024-11-19T09:39:25.293Z] [2024-11-19 10:39:25.026004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:35.501 [2024-11-19 10:39:25.026024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:35.501
00:11:35.501 Latency(us)
00:11:35.501 [2024-11-19T09:39:25.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:35.501 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:35.501 Nvme1n1 : 5.01 16904.55 132.07 0.00 0.00 7564.21 3089.55 18100.42
00:11:35.501 [2024-11-19T09:39:25.293Z] ===================================================================================================================
00:11:35.501 [2024-11-19T09:39:25.293Z] Total : 16904.55 132.07 0.00 0.00 7564.21 3089.55 18100.42
00:11:35.501 [2024-11-19 10:39:25.037472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:35.502 [2024-11-19 10:39:25.037488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:35.502 [2024-11-19 10:39:25.049476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:35.502 [2024-11-19 10:39:25.049494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:35.502 [2024-11-19 10:39:25.061515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:35.502 [2024-11-19 10:39:25.061535]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.502 [2024-11-19 10:39:25.073540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.502 [2024-11-19 10:39:25.073553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.502 [2024-11-19 10:39:25.085573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.502 [2024-11-19 10:39:25.085587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.502 [2024-11-19 10:39:25.097604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.502 [2024-11-19 10:39:25.097617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.502 [2024-11-19 10:39:25.109636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.502 [2024-11-19 10:39:25.109649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.502 [2024-11-19 10:39:25.121667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.502 [2024-11-19 10:39:25.121680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.502 [2024-11-19 10:39:25.133700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.502 [2024-11-19 10:39:25.133713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.502 [2024-11-19 10:39:25.145736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.502 [2024-11-19 10:39:25.145749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.502 [2024-11-19 10:39:25.157769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.502 [2024-11-19 10:39:25.157779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:35.502 [2024-11-19 10:39:25.169796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.502 [2024-11-19 10:39:25.169807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.502 [2024-11-19 10:39:25.181833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.502 [2024-11-19 10:39:25.181843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3807314) - No such process 00:11:35.502 10:39:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3807314 00:11:35.502 10:39:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.502 10:39:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.502 10:39:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.502 10:39:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.502 10:39:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:35.502 10:39:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.502 10:39:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.502 delay0 00:11:35.502 10:39:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.502 10:39:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:35.502 10:39:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 
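The records in this log all share one SPDK log shape: a bracketed timestamp, a `file.c:line:function:` source tag, a `*LEVEL*:` marker, and a message. As a hedged illustration only, here is a small Python sketch that tallies such records by source and message when triaging a log like this one; the regex is an assumption inferred from the lines shown above, not a documented SPDK format.

```python
import re
from collections import Counter

# Pattern inferred from the log records above (an assumption, not a
# documented format): "[timestamp] file.c:NNN:function: *LEVEL*: message".
RECORD = re.compile(
    r"\[(?P<ts>[\d\- :.]+)\]\s+"
    r"(?P<src>\S+\.c:\s?\d+:\w+):\s+"
    r"\*(?P<level>\w+)\*:\s+"
    r"(?P<msg>.+)"
)

def tally(lines):
    """Count occurrences of each (source, message) pair in SPDK-style log lines."""
    counts = Counter()
    for line in lines:
        m = RECORD.search(line)
        if m:
            counts[(m.group("src"), m.group("msg").strip())] += 1
    return counts

# Sample records copied from this log.
sample = [
    "[2024-11-19 10:39:24.001248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace",
    "[2024-11-19 10:39:24.015379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use",
    "[2024-11-19 10:39:24.015397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace",
]

for (src, msg), n in tally(sample).most_common():
    print(f"{n:>3}  {src}  {msg}")
```

Run against the full log, a tally like this would show the same two error pairs repeated hundreds of times while the fio IOPS counters keep ticking, which is consistent with zcopy.sh deliberately re-issuing `nvmf_subsystem_add_ns` for an NSID already in use during active I/O.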
00:11:35.502 10:39:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:35.502 10:39:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.502 10:39:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:11:35.798 [2024-11-19 10:39:25.299825] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:11:42.356 Initializing NVMe Controllers
00:11:42.356 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:42.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:42.356 Initialization complete. Launching workers.
00:11:42.356 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 842
00:11:42.356 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1129, failed to submit 33
00:11:42.356 success 935, unsuccessful 194, failed 0
00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy --
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.356 rmmod nvme_tcp 00:11:42.356 rmmod nvme_fabrics 00:11:42.356 rmmod nvme_keyring 00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3804920 ']' 00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3804920 00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3804920 ']' 00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3804920 00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3804920 00:11:42.356 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:42.357 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:42.357 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3804920' 00:11:42.357 killing process with pid 3804920 00:11:42.357 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3804920 00:11:42.357 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3804920 00:11:42.357 10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.357 
10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
10:39:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:44.258 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:44.258
00:11:44.258 real 0m32.034s
00:11:44.258 user 0m42.528s
00:11:44.258 sys 0m11.294s
00:11:44.258 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:44.258 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:44.258 ************************************
00:11:44.258 END TEST nvmf_zcopy
00:11:44.258 ************************************
00:11:44.258 10:39:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:11:44.259 10:39:33
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:44.259 10:39:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.259 10:39:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:44.259 ************************************ 00:11:44.259 START TEST nvmf_nmic 00:11:44.259 ************************************ 00:11:44.259 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:44.259 * Looking for test storage... 00:11:44.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.259 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:44.259 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:44.259 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # 
local 'op=<' 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.518 10:39:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:44.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.518 --rc genhtml_branch_coverage=1 00:11:44.518 --rc genhtml_function_coverage=1 00:11:44.518 --rc genhtml_legend=1 00:11:44.518 --rc geninfo_all_blocks=1 00:11:44.518 --rc geninfo_unexecuted_blocks=1 00:11:44.518 00:11:44.518 ' 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:44.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.518 --rc genhtml_branch_coverage=1 00:11:44.518 --rc genhtml_function_coverage=1 00:11:44.518 --rc genhtml_legend=1 00:11:44.518 --rc geninfo_all_blocks=1 00:11:44.518 --rc geninfo_unexecuted_blocks=1 00:11:44.518 00:11:44.518 ' 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:44.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.518 --rc genhtml_branch_coverage=1 00:11:44.518 --rc genhtml_function_coverage=1 00:11:44.518 --rc genhtml_legend=1 00:11:44.518 --rc geninfo_all_blocks=1 00:11:44.518 --rc geninfo_unexecuted_blocks=1 00:11:44.518 00:11:44.518 ' 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:44.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.518 --rc genhtml_branch_coverage=1 00:11:44.518 --rc genhtml_function_coverage=1 00:11:44.518 --rc genhtml_legend=1 00:11:44.518 --rc geninfo_all_blocks=1 00:11:44.518 --rc geninfo_unexecuted_blocks=1 00:11:44.518 00:11:44.518 ' 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:44.518 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:44.519 10:39:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:44.519 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:51.087 10:39:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:51.087 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.087 10:39:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:51.087 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:51.087 Found net devices under 0000:86:00.0: cvl_0_0 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:51.087 Found net devices under 0000:86:00.1: cvl_0_1 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:51.087 10:39:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.087 10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:51.087 
10:39:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.087 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.087 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.087 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:51.087 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:51.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:11:51.087 00:11:51.087 --- 10.0.0.2 ping statistics --- 00:11:51.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.087 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:11:51.087 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:51.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:11:51.087 00:11:51.087 --- 10.0.0.1 ping statistics --- 00:11:51.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.088 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3812715 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3812715 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3812715 ']' 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.088 [2024-11-19 10:39:40.155160] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:11:51.088 [2024-11-19 10:39:40.155214] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.088 [2024-11-19 10:39:40.224581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.088 [2024-11-19 10:39:40.268112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.088 [2024-11-19 10:39:40.268149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:51.088 [2024-11-19 10:39:40.268157] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.088 [2024-11-19 10:39:40.268163] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.088 [2024-11-19 10:39:40.268168] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.088 [2024-11-19 10:39:40.269748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.088 [2024-11-19 10:39:40.269789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.088 [2024-11-19 10:39:40.269895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.088 [2024-11-19 10:39:40.269895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.088 [2024-11-19 10:39:40.414901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.088 
10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.088 Malloc0 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.088 [2024-11-19 10:39:40.483468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:51.088 test case1: single bdev can't be used in multiple subsystems 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.088 [2024-11-19 10:39:40.507377] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:51.088 [2024-11-19 
10:39:40.507402] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:51.088 [2024-11-19 10:39:40.507409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.088 request: 00:11:51.088 { 00:11:51.088 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:51.088 "namespace": { 00:11:51.088 "bdev_name": "Malloc0", 00:11:51.088 "no_auto_visible": false 00:11:51.088 }, 00:11:51.088 "method": "nvmf_subsystem_add_ns", 00:11:51.088 "req_id": 1 00:11:51.088 } 00:11:51.088 Got JSON-RPC error response 00:11:51.088 response: 00:11:51.088 { 00:11:51.088 "code": -32602, 00:11:51.088 "message": "Invalid parameters" 00:11:51.088 } 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:51.088 Adding namespace failed - expected result. 
00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:51.088 test case2: host connect to nvmf target in multiple paths 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.088 [2024-11-19 10:39:40.519524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.088 10:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.023 10:39:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:53.399 10:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:53.399 10:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:53.399 10:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.399 10:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:53.399 10:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:11:55.309 10:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:55.309 10:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:55.309 10:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:55.309 10:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:55.309 10:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.309 10:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:55.309 10:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:55.309 [global] 00:11:55.309 thread=1 00:11:55.309 invalidate=1 00:11:55.309 rw=write 00:11:55.309 time_based=1 00:11:55.309 runtime=1 00:11:55.309 ioengine=libaio 00:11:55.309 direct=1 00:11:55.309 bs=4096 00:11:55.309 iodepth=1 00:11:55.309 norandommap=0 00:11:55.309 numjobs=1 00:11:55.309 00:11:55.309 verify_dump=1 00:11:55.309 verify_backlog=512 00:11:55.309 verify_state_save=0 00:11:55.309 do_verify=1 00:11:55.309 verify=crc32c-intel 00:11:55.309 [job0] 00:11:55.309 filename=/dev/nvme0n1 00:11:55.309 Could not set queue depth (nvme0n1) 00:11:55.566 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:55.566 fio-3.35 00:11:55.566 Starting 1 thread 00:11:56.933 00:11:56.933 job0: (groupid=0, jobs=1): err= 0: pid=3813780: Tue Nov 19 10:39:46 2024 00:11:56.933 read: IOPS=2389, BW=9558KiB/s (9788kB/s)(9568KiB/1001msec) 00:11:56.933 slat (nsec): min=6391, max=26889, avg=7413.03, stdev=965.76 00:11:56.933 clat (usec): min=181, max=41426, avg=253.99, stdev=843.24 00:11:56.933 lat (usec): min=188, max=41433, 
avg=261.40, stdev=843.24 00:11:56.933 clat percentiles (usec): 00:11:56.933 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 215], 00:11:56.933 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 243], 00:11:56.933 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 273], 00:11:56.933 | 99.00th=[ 302], 99.50th=[ 371], 99.90th=[ 1172], 99.95th=[ 1303], 00:11:56.933 | 99.99th=[41681] 00:11:56.933 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:56.933 slat (nsec): min=9779, max=39337, avg=10895.81, stdev=1320.85 00:11:56.933 clat (usec): min=107, max=1181, avg=131.42, stdev=31.77 00:11:56.933 lat (usec): min=118, max=1192, avg=142.32, stdev=31.95 00:11:56.933 clat percentiles (usec): 00:11:56.933 | 1.00th=[ 114], 5.00th=[ 117], 10.00th=[ 119], 20.00th=[ 121], 00:11:56.933 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 125], 60.00th=[ 127], 00:11:56.933 | 70.00th=[ 130], 80.00th=[ 137], 90.00th=[ 161], 95.00th=[ 169], 00:11:56.933 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 388], 99.95th=[ 971], 00:11:56.933 | 99.99th=[ 1188] 00:11:56.933 bw ( KiB/s): min=10704, max=10704, per=100.00%, avg=10704.00, stdev= 0.00, samples=1 00:11:56.933 iops : min= 2676, max= 2676, avg=2676.00, stdev= 0.00, samples=1 00:11:56.933 lat (usec) : 250=83.95%, 500=15.93%, 1000=0.02% 00:11:56.933 lat (msec) : 2=0.08%, 50=0.02% 00:11:56.933 cpu : usr=3.10%, sys=4.00%, ctx=4952, majf=0, minf=1 00:11:56.933 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:56.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.933 issued rwts: total=2392,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.933 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:56.933 00:11:56.933 Run status group 0 (all jobs): 00:11:56.933 READ: bw=9558KiB/s (9788kB/s), 9558KiB/s-9558KiB/s (9788kB/s-9788kB/s), io=9568KiB 
(9798kB), run=1001-1001msec 00:11:56.933 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:11:56.933 00:11:56.933 Disk stats (read/write): 00:11:56.933 nvme0n1: ios=2098/2388, merge=0/0, ticks=536/298, in_queue=834, util=91.18% 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:56.933 rmmod nvme_tcp 00:11:56.933 rmmod nvme_fabrics 00:11:56.933 rmmod nvme_keyring 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3812715 ']' 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3812715 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3812715 ']' 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3812715 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:56.933 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.934 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3812715 00:11:56.934 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:56.934 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:56.934 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3812715' 00:11:56.934 killing process with pid 3812715 00:11:56.934 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3812715 00:11:56.934 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3812715 00:11:57.192 10:39:46 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:57.192 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:57.192 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:57.192 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:57.193 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:57.193 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:57.193 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:57.193 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:57.193 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:57.193 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.193 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.193 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.096 10:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:59.096 00:11:59.096 real 0m14.940s 00:11:59.096 user 0m33.309s 00:11:59.096 sys 0m5.381s 00:11:59.096 10:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.096 10:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:59.096 ************************************ 00:11:59.096 END TEST nvmf_nmic 00:11:59.096 ************************************ 00:11:59.096 10:39:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:59.096 10:39:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:59.096 10:39:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.096 10:39:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:59.355 ************************************ 00:11:59.355 START TEST nvmf_fio_target 00:11:59.355 ************************************ 00:11:59.355 10:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:59.355 * Looking for test storage... 00:11:59.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:59.355 10:39:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:59.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.355 --rc genhtml_branch_coverage=1 00:11:59.355 --rc genhtml_function_coverage=1 00:11:59.355 --rc genhtml_legend=1 00:11:59.355 --rc geninfo_all_blocks=1 00:11:59.355 --rc geninfo_unexecuted_blocks=1 00:11:59.355 00:11:59.355 ' 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:59.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.355 --rc genhtml_branch_coverage=1 00:11:59.355 --rc genhtml_function_coverage=1 00:11:59.355 --rc genhtml_legend=1 00:11:59.355 --rc geninfo_all_blocks=1 00:11:59.355 --rc geninfo_unexecuted_blocks=1 00:11:59.355 00:11:59.355 ' 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:59.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.355 --rc genhtml_branch_coverage=1 00:11:59.355 --rc genhtml_function_coverage=1 00:11:59.355 --rc genhtml_legend=1 00:11:59.355 --rc geninfo_all_blocks=1 00:11:59.355 --rc geninfo_unexecuted_blocks=1 00:11:59.355 00:11:59.355 ' 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:11:59.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.355 --rc genhtml_branch_coverage=1 00:11:59.355 --rc genhtml_function_coverage=1 00:11:59.355 --rc genhtml_legend=1 00:11:59.355 --rc geninfo_all_blocks=1 00:11:59.355 --rc geninfo_unexecuted_blocks=1 00:11:59.355 00:11:59.355 ' 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.355 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:59.356 10:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:05.923 10:39:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:05.923 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:05.923 10:39:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.923 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:05.924 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:05.924 Found net devices under 0000:86:00.0: cvl_0_0 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:05.924 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:05.924 10:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:05.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:05.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:12:05.924 00:12:05.924 --- 10.0.0.2 ping statistics --- 00:12:05.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.924 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:05.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:12:05.924 00:12:05.924 --- 10.0.0.1 ping statistics --- 00:12:05.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.924 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3817542 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3817542 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3817542 ']' 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.924 10:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.924 [2024-11-19 10:39:55.213041] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:12:05.924 [2024-11-19 10:39:55.213086] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.924 [2024-11-19 10:39:55.292594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.924 [2024-11-19 10:39:55.332908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.924 [2024-11-19 10:39:55.332945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.924 [2024-11-19 10:39:55.332952] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.924 [2024-11-19 10:39:55.332958] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.924 [2024-11-19 10:39:55.332962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:05.924 [2024-11-19 10:39:55.334595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.924 [2024-11-19 10:39:55.334701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.924 [2024-11-19 10:39:55.334804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.925 [2024-11-19 10:39:55.334805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.487 10:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.487 10:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:12:06.487 10:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:06.487 10:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:06.487 10:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.487 10:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.487 10:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:06.487 [2024-11-19 10:39:56.261992] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.744 10:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:06.744 10:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:06.744 10:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:07.001 10:39:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:07.001 10:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:07.258 10:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:07.258 10:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:07.515 10:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:07.515 10:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:07.771 10:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:07.771 10:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:07.771 10:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:08.028 10:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:08.028 10:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:08.284 10:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:08.284 10:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:12:08.541 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:08.798 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:08.798 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:08.798 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:08.798 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:09.054 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.311 [2024-11-19 10:39:58.950611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.311 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:09.569 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:09.826 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:12:10.757 10:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:10.757 10:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:12:10.757 10:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.757 10:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:10.757 10:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:10.757 10:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:12.691 10:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:12.947 10:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:12.947 10:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.947 10:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:12.947 10:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.947 10:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:12.947 10:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:12.947 [global] 00:12:12.947 thread=1 00:12:12.947 invalidate=1 00:12:12.947 rw=write 00:12:12.947 time_based=1 00:12:12.947 runtime=1 00:12:12.947 ioengine=libaio 00:12:12.947 direct=1 00:12:12.947 bs=4096 00:12:12.947 iodepth=1 00:12:12.947 norandommap=0 00:12:12.947 numjobs=1 00:12:12.947 00:12:12.947 
verify_dump=1 00:12:12.947 verify_backlog=512 00:12:12.947 verify_state_save=0 00:12:12.947 do_verify=1 00:12:12.947 verify=crc32c-intel 00:12:12.947 [job0] 00:12:12.947 filename=/dev/nvme0n1 00:12:12.947 [job1] 00:12:12.947 filename=/dev/nvme0n2 00:12:12.947 [job2] 00:12:12.947 filename=/dev/nvme0n3 00:12:12.947 [job3] 00:12:12.947 filename=/dev/nvme0n4 00:12:12.947 Could not set queue depth (nvme0n1) 00:12:12.947 Could not set queue depth (nvme0n2) 00:12:12.947 Could not set queue depth (nvme0n3) 00:12:12.947 Could not set queue depth (nvme0n4) 00:12:13.204 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:13.204 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:13.204 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:13.204 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:13.204 fio-3.35 00:12:13.204 Starting 4 threads 00:12:14.596 00:12:14.596 job0: (groupid=0, jobs=1): err= 0: pid=3819050: Tue Nov 19 10:40:04 2024 00:12:14.596 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 00:12:14.596 slat (nsec): min=9953, max=23470, avg=21919.77, stdev=2691.87 00:12:14.596 clat (usec): min=40855, max=41410, avg=40985.86, stdev=109.59 00:12:14.596 lat (usec): min=40877, max=41420, avg=41007.78, stdev=107.27 00:12:14.596 clat percentiles (usec): 00:12:14.596 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:12:14.596 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:14.596 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:14.596 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:14.596 | 99.99th=[41157] 00:12:14.596 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:12:14.596 slat (nsec): min=10848, 
max=43777, avg=12947.01, stdev=3045.31 00:12:14.596 clat (usec): min=138, max=347, avg=180.32, stdev=17.64 00:12:14.596 lat (usec): min=151, max=358, avg=193.27, stdev=17.44 00:12:14.596 clat percentiles (usec): 00:12:14.596 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 167], 00:12:14.596 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:12:14.596 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 202], 00:12:14.596 | 99.00th=[ 212], 99.50th=[ 262], 99.90th=[ 347], 99.95th=[ 347], 00:12:14.596 | 99.99th=[ 347] 00:12:14.596 bw ( KiB/s): min= 4096, max= 4096, per=50.35%, avg=4096.00, stdev= 0.00, samples=1 00:12:14.596 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:14.596 lat (usec) : 250=95.32%, 500=0.56% 00:12:14.596 lat (msec) : 50=4.12% 00:12:14.596 cpu : usr=0.60%, sys=0.70%, ctx=536, majf=0, minf=1 00:12:14.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.596 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.596 job1: (groupid=0, jobs=1): err= 0: pid=3819074: Tue Nov 19 10:40:04 2024 00:12:14.596 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:12:14.596 slat (nsec): min=11211, max=37953, avg=23069.86, stdev=5330.57 00:12:14.596 clat (usec): min=40839, max=41300, avg=40981.96, stdev=80.93 00:12:14.596 lat (usec): min=40861, max=41311, avg=41005.03, stdev=78.94 00:12:14.596 clat percentiles (usec): 00:12:14.596 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:12:14.596 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:14.596 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:14.596 | 99.00th=[41157], 99.50th=[41157], 
99.90th=[41157], 99.95th=[41157], 00:12:14.596 | 99.99th=[41157] 00:12:14.596 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:12:14.596 slat (nsec): min=11153, max=98570, avg=13857.81, stdev=4233.16 00:12:14.596 clat (usec): min=142, max=237, avg=182.82, stdev=12.45 00:12:14.596 lat (usec): min=155, max=310, avg=196.68, stdev=13.58 00:12:14.596 clat percentiles (usec): 00:12:14.596 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:12:14.596 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 186], 00:12:14.596 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 204], 00:12:14.596 | 99.00th=[ 223], 99.50th=[ 225], 99.90th=[ 239], 99.95th=[ 239], 00:12:14.596 | 99.99th=[ 239] 00:12:14.596 bw ( KiB/s): min= 4096, max= 4096, per=50.35%, avg=4096.00, stdev= 0.00, samples=1 00:12:14.596 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:14.596 lat (usec) : 250=95.88% 00:12:14.596 lat (msec) : 50=4.12% 00:12:14.596 cpu : usr=1.10%, sys=0.40%, ctx=535, majf=0, minf=1 00:12:14.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.596 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.596 job2: (groupid=0, jobs=1): err= 0: pid=3819092: Tue Nov 19 10:40:04 2024 00:12:14.596 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:12:14.596 slat (nsec): min=10828, max=25109, avg=22923.32, stdev=2944.87 00:12:14.596 clat (usec): min=40896, max=41302, avg=40981.54, stdev=80.43 00:12:14.596 lat (usec): min=40920, max=41313, avg=41004.46, stdev=77.90 00:12:14.596 clat percentiles (usec): 00:12:14.596 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:12:14.596 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:14.596 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:14.596 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:14.596 | 99.99th=[41157] 00:12:14.596 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:12:14.596 slat (nsec): min=11398, max=40534, avg=12816.92, stdev=1964.86 00:12:14.596 clat (usec): min=139, max=308, avg=184.73, stdev=14.84 00:12:14.596 lat (usec): min=150, max=320, avg=197.55, stdev=15.07 00:12:14.596 clat percentiles (usec): 00:12:14.596 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:12:14.596 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 186], 00:12:14.596 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 206], 00:12:14.596 | 99.00th=[ 235], 99.50th=[ 260], 99.90th=[ 310], 99.95th=[ 310], 00:12:14.596 | 99.99th=[ 310] 00:12:14.596 bw ( KiB/s): min= 4096, max= 4096, per=50.35%, avg=4096.00, stdev= 0.00, samples=1 00:12:14.596 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:14.596 lat (usec) : 250=95.32%, 500=0.56% 00:12:14.596 lat (msec) : 50=4.12% 00:12:14.596 cpu : usr=0.50%, sys=0.90%, ctx=534, majf=0, minf=2 00:12:14.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.596 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.596 job3: (groupid=0, jobs=1): err= 0: pid=3819098: Tue Nov 19 10:40:04 2024 00:12:14.596 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:12:14.596 slat (nsec): min=9452, max=24303, avg=22712.09, stdev=2981.91 00:12:14.596 clat (usec): min=40690, max=42061, avg=41185.10, stdev=446.39 00:12:14.596 lat (usec): min=40699, 
max=42084, avg=41207.81, stdev=447.06 00:12:14.596 clat percentiles (usec): 00:12:14.596 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:12:14.596 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:14.596 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:12:14.596 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:14.596 | 99.99th=[42206] 00:12:14.596 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:12:14.596 slat (nsec): min=9855, max=61522, avg=10922.12, stdev=2477.62 00:12:14.596 clat (usec): min=145, max=318, avg=182.63, stdev=16.71 00:12:14.596 lat (usec): min=155, max=380, avg=193.55, stdev=17.64 00:12:14.596 clat percentiles (usec): 00:12:14.596 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 169], 00:12:14.596 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:12:14.596 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 204], 00:12:14.596 | 99.00th=[ 225], 99.50th=[ 260], 99.90th=[ 318], 99.95th=[ 318], 00:12:14.596 | 99.99th=[ 318] 00:12:14.596 bw ( KiB/s): min= 4096, max= 4096, per=50.35%, avg=4096.00, stdev= 0.00, samples=1 00:12:14.596 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:14.596 lat (usec) : 250=95.13%, 500=0.75% 00:12:14.596 lat (msec) : 50=4.12% 00:12:14.596 cpu : usr=0.00%, sys=0.80%, ctx=535, majf=0, minf=1 00:12:14.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.596 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.596 00:12:14.596 Run status group 0 (all jobs): 00:12:14.596 READ: bw=350KiB/s (358kB/s), 87.4KiB/s-87.7KiB/s (89.5kB/s-89.8kB/s), io=352KiB (360kB), 
run=1003-1007msec 00:12:14.596 WRITE: bw=8135KiB/s (8330kB/s), 2034KiB/s-2042KiB/s (2083kB/s-2091kB/s), io=8192KiB (8389kB), run=1003-1007msec 00:12:14.596 00:12:14.596 Disk stats (read/write): 00:12:14.596 nvme0n1: ios=41/512, merge=0/0, ticks=1601/88, in_queue=1689, util=84.77% 00:12:14.597 nvme0n2: ios=55/512, merge=0/0, ticks=1694/93, in_queue=1787, util=88.63% 00:12:14.597 nvme0n3: ios=75/512, merge=0/0, ticks=815/89, in_queue=904, util=94.00% 00:12:14.597 nvme0n4: ios=41/512, merge=0/0, ticks=1649/93, in_queue=1742, util=94.06% 00:12:14.597 10:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:14.597 [global] 00:12:14.597 thread=1 00:12:14.597 invalidate=1 00:12:14.597 rw=randwrite 00:12:14.597 time_based=1 00:12:14.597 runtime=1 00:12:14.597 ioengine=libaio 00:12:14.597 direct=1 00:12:14.597 bs=4096 00:12:14.597 iodepth=1 00:12:14.597 norandommap=0 00:12:14.597 numjobs=1 00:12:14.597 00:12:14.597 verify_dump=1 00:12:14.597 verify_backlog=512 00:12:14.597 verify_state_save=0 00:12:14.597 do_verify=1 00:12:14.597 verify=crc32c-intel 00:12:14.597 [job0] 00:12:14.597 filename=/dev/nvme0n1 00:12:14.597 [job1] 00:12:14.597 filename=/dev/nvme0n2 00:12:14.597 [job2] 00:12:14.597 filename=/dev/nvme0n3 00:12:14.597 [job3] 00:12:14.597 filename=/dev/nvme0n4 00:12:14.597 Could not set queue depth (nvme0n1) 00:12:14.597 Could not set queue depth (nvme0n2) 00:12:14.597 Could not set queue depth (nvme0n3) 00:12:14.597 Could not set queue depth (nvme0n4) 00:12:14.857 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:14.857 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:14.857 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:14.857 job3: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:14.857 fio-3.35 00:12:14.857 Starting 4 threads 00:12:16.225 00:12:16.225 job0: (groupid=0, jobs=1): err= 0: pid=3819495: Tue Nov 19 10:40:05 2024 00:12:16.225 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:12:16.225 slat (nsec): min=10165, max=26175, avg=24474.95, stdev=3240.68 00:12:16.225 clat (usec): min=40911, max=42021, avg=41347.69, stdev=479.42 00:12:16.225 lat (usec): min=40937, max=42047, avg=41372.16, stdev=479.14 00:12:16.225 clat percentiles (usec): 00:12:16.225 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:12:16.225 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:16.225 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:16.225 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:16.225 | 99.99th=[42206] 00:12:16.225 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:12:16.225 slat (nsec): min=10587, max=44204, avg=13060.31, stdev=2361.17 00:12:16.225 clat (usec): min=141, max=306, avg=176.95, stdev=17.66 00:12:16.225 lat (usec): min=153, max=350, avg=190.01, stdev=18.45 00:12:16.225 clat percentiles (usec): 00:12:16.225 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 165], 00:12:16.225 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 178], 00:12:16.225 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 210], 00:12:16.225 | 99.00th=[ 243], 99.50th=[ 251], 99.90th=[ 306], 99.95th=[ 306], 00:12:16.225 | 99.99th=[ 306] 00:12:16.225 bw ( KiB/s): min= 4096, max= 4096, per=25.28%, avg=4096.00, stdev= 0.00, samples=1 00:12:16.225 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:16.225 lat (usec) : 250=95.32%, 500=0.56% 00:12:16.225 lat (msec) : 50=4.12% 00:12:16.225 cpu : usr=0.59%, sys=0.89%, ctx=536, majf=0, minf=1 00:12:16.225 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:16.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.225 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.225 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:16.225 job1: (groupid=0, jobs=1): err= 0: pid=3819496: Tue Nov 19 10:40:05 2024 00:12:16.225 read: IOPS=2277, BW=9111KiB/s (9330kB/s)(9120KiB/1001msec) 00:12:16.225 slat (nsec): min=6660, max=28593, avg=7608.20, stdev=1002.38 00:12:16.225 clat (usec): min=149, max=41842, avg=256.53, stdev=1503.24 00:12:16.225 lat (usec): min=156, max=41850, avg=264.13, stdev=1503.26 00:12:16.225 clat percentiles (usec): 00:12:16.225 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:12:16.225 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:12:16.225 | 70.00th=[ 198], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 367], 00:12:16.225 | 99.00th=[ 379], 99.50th=[ 383], 99.90th=[41157], 99.95th=[41681], 00:12:16.225 | 99.99th=[41681] 00:12:16.225 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:16.225 slat (nsec): min=9727, max=64454, avg=10780.83, stdev=1581.52 00:12:16.225 clat (usec): min=103, max=314, avg=139.05, stdev=27.44 00:12:16.225 lat (usec): min=114, max=379, avg=149.83, stdev=27.83 00:12:16.225 clat percentiles (usec): 00:12:16.225 | 1.00th=[ 106], 5.00th=[ 110], 10.00th=[ 112], 20.00th=[ 116], 00:12:16.225 | 30.00th=[ 119], 40.00th=[ 122], 50.00th=[ 127], 60.00th=[ 139], 00:12:16.225 | 70.00th=[ 159], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 186], 00:12:16.225 | 99.00th=[ 206], 99.50th=[ 225], 99.90th=[ 249], 99.95th=[ 258], 00:12:16.225 | 99.99th=[ 314] 00:12:16.225 bw ( KiB/s): min= 8192, max= 8192, per=50.55%, avg=8192.00, stdev= 0.00, samples=1 00:12:16.225 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:16.225 lat (usec) : 
250=96.07%, 500=3.86%
00:12:16.225 lat (msec) : 50=0.06%
00:12:16.225 cpu : usr=2.30%, sys=4.90%, ctx=4841, majf=0, minf=1
00:12:16.225 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:16.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:16.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:16.225 issued rwts: total=2280,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:16.225 latency : target=0, window=0, percentile=100.00%, depth=1
00:12:16.225 job2: (groupid=0, jobs=1): err= 0: pid=3819497: Tue Nov 19 10:40:05 2024
00:12:16.225 read: IOPS=22, BW=91.0KiB/s (93.2kB/s)(92.0KiB/1011msec)
00:12:16.225 slat (nsec): min=11082, max=22717, avg=19861.43, stdev=3753.70
00:12:16.225 clat (usec): min=209, max=42092, avg=39451.96, stdev=8567.41
00:12:16.225 lat (usec): min=231, max=42108, avg=39471.82, stdev=8566.81
00:12:16.225 clat percentiles (usec):
00:12:16.225 | 1.00th=[ 210], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157],
00:12:16.225 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:12:16.225 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:12:16.225 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:12:16.225 | 99.99th=[42206]
00:12:16.225 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets
00:12:16.225 slat (nsec): min=11147, max=41613, avg=12492.34, stdev=1709.01
00:12:16.225 clat (usec): min=143, max=292, avg=184.29, stdev=19.66
00:12:16.225 lat (usec): min=156, max=333, avg=196.78, stdev=20.05
00:12:16.225 clat percentiles (usec):
00:12:16.225 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172],
00:12:16.225 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184],
00:12:16.225 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 239],
00:12:16.225 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 293], 99.95th=[ 293],
00:12:16.225 | 99.99th=[ 293]
00:12:16.225 bw ( KiB/s): min= 4096, max= 4096, per=25.28%, avg=4096.00, stdev= 0.00, samples=1
00:12:16.225 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:12:16.225 lat (usec) : 250=95.51%, 500=0.37%
00:12:16.225 lat (msec) : 50=4.11%
00:12:16.225 cpu : usr=0.40%, sys=0.50%, ctx=536, majf=0, minf=1
00:12:16.226 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:16.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:16.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:16.226 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:16.226 latency : target=0, window=0, percentile=100.00%, depth=1
00:12:16.226 job3: (groupid=0, jobs=1): err= 0: pid=3819498: Tue Nov 19 10:40:05 2024
00:12:16.226 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec)
00:12:16.226 slat (nsec): min=12162, max=27436, avg=24222.73, stdev=3243.87
00:12:16.226 clat (usec): min=40809, max=41042, avg=40960.85, stdev=59.84
00:12:16.226 lat (usec): min=40832, max=41069, avg=40985.07, stdev=60.43
00:12:16.226 clat percentiles (usec):
00:12:16.226 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157],
00:12:16.226 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:12:16.226 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:12:16.226 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:12:16.226 | 99.99th=[41157]
00:12:16.226 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets
00:12:16.226 slat (nsec): min=10796, max=50155, avg=12844.64, stdev=2968.01
00:12:16.226 clat (usec): min=149, max=317, avg=184.02, stdev=14.24
00:12:16.226 lat (usec): min=167, max=367, avg=196.86, stdev=15.10
00:12:16.226 clat percentiles (usec):
00:12:16.226 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174],
00:12:16.226 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186],
00:12:16.226 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 208],
00:12:16.226 | 99.00th=[ 223], 99.50th=[ 253], 99.90th=[ 318], 99.95th=[ 318],
00:12:16.226 | 99.99th=[ 318]
00:12:16.226 bw ( KiB/s): min= 4096, max= 4096, per=25.28%, avg=4096.00, stdev= 0.00, samples=1
00:12:16.226 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:12:16.226 lat (usec) : 250=95.32%, 500=0.56%
00:12:16.226 lat (msec) : 50=4.12%
00:12:16.226 cpu : usr=0.30%, sys=1.10%, ctx=535, majf=0, minf=1
00:12:16.226 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:16.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:16.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:16.226 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:16.226 latency : target=0, window=0, percentile=100.00%, depth=1
00:12:16.226
00:12:16.226 Run status group 0 (all jobs):
00:12:16.226 READ: bw=9286KiB/s (9509kB/s), 87.1KiB/s-9111KiB/s (89.2kB/s-9330kB/s), io=9388KiB (9613kB), run=1001-1011msec
00:12:16.226 WRITE: bw=15.8MiB/s (16.6MB/s), 2026KiB/s-9.99MiB/s (2074kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1011msec
00:12:16.226
00:12:16.226 Disk stats (read/write):
00:12:16.226 nvme0n1: ios=67/512, merge=0/0, ticks=1655/88, in_queue=1743, util=90.28%
00:12:16.226 nvme0n2: ios=2022/2048, merge=0/0, ticks=579/271, in_queue=850, util=90.96%
00:12:16.226 nvme0n3: ios=61/512, merge=0/0, ticks=1663/95, in_queue=1758, util=97.09%
00:12:16.226 nvme0n4: ios=76/512, merge=0/0, ticks=1360/94, in_queue=1454, util=98.74%
00:12:16.226 10:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:12:16.226 [global]
00:12:16.226 thread=1
00:12:16.226 invalidate=1
00:12:16.226 rw=write
00:12:16.226 time_based=1
00:12:16.226 runtime=1
00:12:16.226 ioengine=libaio
00:12:16.226 direct=1
00:12:16.226 bs=4096
00:12:16.226 iodepth=128
00:12:16.226 norandommap=0
00:12:16.226 numjobs=1
00:12:16.226
00:12:16.226 verify_dump=1
00:12:16.226 verify_backlog=512
00:12:16.226 verify_state_save=0
00:12:16.226 do_verify=1
00:12:16.226 verify=crc32c-intel
00:12:16.226 [job0]
00:12:16.226 filename=/dev/nvme0n1
00:12:16.226 [job1]
00:12:16.226 filename=/dev/nvme0n2
00:12:16.226 [job2]
00:12:16.226 filename=/dev/nvme0n3
00:12:16.226 [job3]
00:12:16.226 filename=/dev/nvme0n4
00:12:16.226 Could not set queue depth (nvme0n1)
00:12:16.226 Could not set queue depth (nvme0n2)
00:12:16.226 Could not set queue depth (nvme0n3)
00:12:16.226 Could not set queue depth (nvme0n4)
00:12:16.226 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:12:16.226 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:12:16.226 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:12:16.226 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:12:16.226 fio-3.35
00:12:16.226 Starting 4 threads
00:12:17.598
00:12:17.598 job0: (groupid=0, jobs=1): err= 0: pid=3819868: Tue Nov 19 10:40:07 2024
00:12:17.598 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec)
00:12:17.598 slat (nsec): min=1142, max=16655k, avg=108711.71, stdev=848257.51
00:12:17.598 clat (usec): min=2379, max=91147, avg=13795.65, stdev=10250.84
00:12:17.598 lat (usec): min=2387, max=91156, avg=13904.37, stdev=10357.18
00:12:17.598 clat percentiles (usec):
00:12:17.598 | 1.00th=[ 4621], 5.00th=[ 5080], 10.00th=[ 5800], 20.00th=[ 8717],
00:12:17.598 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814],
00:12:17.598 | 70.00th=[13566], 80.00th=[17171], 90.00th=[23200], 95.00th=[31065],
00:12:17.598 | 99.00th=[70779], 99.50th=[80217], 99.90th=[90702], 99.95th=[90702],
00:12:17.598 | 99.99th=[90702]
00:12:17.598 write: IOPS=3621, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1005msec); 0 zone resets
00:12:17.598 slat (usec): min=2, max=41072, avg=135.46, stdev=1137.26
00:12:17.598 clat (usec): min=336, max=102664, avg=18959.16, stdev=21772.41
00:12:17.598 lat (usec): min=366, max=102685, avg=19094.62, stdev=21932.58
00:12:17.598 clat percentiles (usec):
00:12:17.598 | 1.00th=[ 1418], 5.00th=[ 4359], 10.00th=[ 4883], 20.00th=[ 6456],
00:12:17.598 | 30.00th=[ 7373], 40.00th=[ 7767], 50.00th=[ 9896], 60.00th=[ 10552],
00:12:17.598 | 70.00th=[ 13173], 80.00th=[ 26608], 90.00th=[ 53216], 95.00th=[ 70779],
00:12:17.598 | 99.00th=[ 96994], 99.50th=[101188], 99.90th=[102237], 99.95th=[102237],
00:12:17.598 | 99.99th=[102237]
00:12:17.598 bw ( KiB/s): min= 8159, max=20496, per=22.76%, avg=14327.50, stdev=8723.58, samples=2
00:12:17.598 iops : min= 2039, max= 5124, avg=3581.50, stdev=2181.42, samples=2
00:12:17.598 lat (usec) : 500=0.04%, 750=0.25%, 1000=0.18%
00:12:17.598 lat (msec) : 2=0.33%, 4=1.73%, 10=37.61%, 20=40.75%, 50=12.82%
00:12:17.598 lat (msec) : 100=5.99%, 250=0.29%
00:12:17.598 cpu : usr=2.39%, sys=4.08%, ctx=316, majf=0, minf=1
00:12:17.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1%
00:12:17.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:17.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:17.598 issued rwts: total=3584,3640,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:17.598 latency : target=0, window=0, percentile=100.00%, depth=128
00:12:17.598 job1: (groupid=0, jobs=1): err= 0: pid=3819869: Tue Nov 19 10:40:07 2024
00:12:17.598 read: IOPS=3294, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1008msec)
00:12:17.598 slat (nsec): min=1669, max=15043k, avg=113385.11, stdev=874368.70
00:12:17.598 clat (usec): min=1935, max=53635, avg=14451.20, stdev=6714.93
00:12:17.598 lat (usec): min=1943, max=53638, avg=14564.59, stdev=6806.71
00:12:17.598 clat percentiles (usec):
00:12:17.598 | 1.00th=[ 3294], 5.00th=[ 5342], 10.00th=[ 8094], 20.00th=[10552],
00:12:17.598 | 30.00th=[10945], 40.00th=[11469], 50.00th=[12387], 60.00th=[14353],
00:12:17.598 | 70.00th=[16581], 80.00th=[18220], 90.00th=[23200], 95.00th=[26346],
00:12:17.598 | 99.00th=[41157], 99.50th=[47449], 99.90th=[53740], 99.95th=[53740],
00:12:17.598 | 99.99th=[53740]
00:12:17.598 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets
00:12:17.598 slat (usec): min=2, max=50833, avg=160.50, stdev=1401.86
00:12:17.598 clat (usec): min=518, max=102762, avg=19109.24, stdev=18308.95
00:12:17.598 lat (usec): min=581, max=109562, avg=19269.74, stdev=18493.02
00:12:17.598 clat percentiles (msec):
00:12:17.598 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 7], 20.00th=[ 9],
00:12:17.598 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 14],
00:12:17.598 | 70.00th=[ 21], 80.00th=[ 27], 90.00th=[ 44], 95.00th=[ 57],
00:12:17.598 | 99.00th=[ 95], 99.50th=[ 99], 99.90th=[ 103], 99.95th=[ 104],
00:12:17.598 | 99.99th=[ 104]
00:12:17.598 bw ( KiB/s): min=11784, max=16854, per=22.74%, avg=14319.00, stdev=3585.03, samples=2
00:12:17.598 iops : min= 2946, max= 4213, avg=3579.50, stdev=895.90, samples=2
00:12:17.598 lat (usec) : 750=0.01%
00:12:17.598 lat (msec) : 2=0.41%, 4=4.16%, 10=23.61%, 20=49.69%, 50=18.41%
00:12:17.598 lat (msec) : 100=3.52%, 250=0.20%
00:12:17.598 cpu : usr=3.08%, sys=4.17%, ctx=240, majf=0, minf=1
00:12:17.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:12:17.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:17.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:17.599 issued rwts: total=3321,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:17.599 latency : target=0, window=0, percentile=100.00%, depth=128
00:12:17.599 job2: (groupid=0, jobs=1): err= 0: pid=3819871: Tue Nov 19 10:40:07 2024
00:12:17.599 read: IOPS=5735, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1004msec)
00:12:17.599 slat (nsec): min=1106, max=10673k, avg=77164.56, stdev=523565.40
00:12:17.599 clat (usec): min=1102, max=64676, avg=11047.23, stdev=5990.37
00:12:17.599 lat (usec): min=1126, max=69982, avg=11124.39, stdev=6013.65
00:12:17.599 clat percentiles (usec):
00:12:17.599 | 1.00th=[ 2073], 5.00th=[ 6849], 10.00th=[ 7439], 20.00th=[ 8455],
00:12:17.599 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[10683],
00:12:17.599 | 70.00th=[11207], 80.00th=[11731], 90.00th=[14615], 95.00th=[17433],
00:12:17.599 | 99.00th=[36439], 99.50th=[60031], 99.90th=[63701], 99.95th=[63701],
00:12:17.599 | 99.99th=[64750]
00:12:17.599 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets
00:12:17.599 slat (nsec): min=1837, max=13722k, avg=79557.78, stdev=606686.38
00:12:17.599 clat (usec): min=693, max=68402, avg=10379.73, stdev=7373.65
00:12:17.599 lat (usec): min=701, max=68411, avg=10459.29, stdev=7410.24
00:12:17.599 clat percentiles (usec):
00:12:17.599 | 1.00th=[ 2212], 5.00th=[ 3916], 10.00th=[ 5538], 20.00th=[ 7242],
00:12:17.599 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9241],
00:12:17.599 | 70.00th=[10159], 80.00th=[11207], 90.00th=[13698], 95.00th=[24773],
00:12:17.599 | 99.00th=[53740], 99.50th=[58459], 99.90th=[64750], 99.95th=[66323],
00:12:17.599 | 99.99th=[68682]
00:12:17.599 bw ( KiB/s): min=20464, max=28614, per=38.98%, avg=24539.00, stdev=5762.92, samples=2
00:12:17.599 iops : min= 5116, max= 7153, avg=6134.50, stdev=1440.38, samples=2
00:12:17.599 lat (usec) : 750=0.03%, 1000=0.02%
00:12:17.599 lat (msec) : 2=0.66%, 4=3.40%, 10=53.08%, 20=37.62%, 50=4.16%
00:12:17.599 lat (msec) : 100=1.04%
00:12:17.599 cpu : usr=4.19%, sys=6.78%, ctx=582, majf=0, minf=2
00:12:17.599 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:12:17.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:17.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:17.599 issued rwts: total=5758,6144,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:17.599 latency : target=0, window=0, percentile=100.00%, depth=128
00:12:17.599 job3: (groupid=0, jobs=1): err= 0: pid=3819872: Tue Nov 19 10:40:07 2024
00:12:17.599 read: IOPS=2108, BW=8435KiB/s (8637kB/s)(8536KiB/1012msec)
00:12:17.599 slat (nsec): min=1894, max=22329k, avg=186817.75, stdev=1214959.10
00:12:17.599 clat (usec): min=5930, max=87727, avg=19546.16, stdev=16930.97
00:12:17.599 lat (usec): min=5936, max=87738, avg=19732.98, stdev=17073.01
00:12:17.599 clat percentiles (usec):
00:12:17.599 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10683],
00:12:17.599 | 30.00th=[10814], 40.00th=[11600], 50.00th=[11600], 60.00th=[14222],
00:12:17.599 | 70.00th=[17433], 80.00th=[19530], 90.00th=[50070], 95.00th=[64750],
00:12:17.599 | 99.00th=[77071], 99.50th=[85459], 99.90th=[87557], 99.95th=[87557],
00:12:17.599 | 99.99th=[87557]
00:12:17.599 write: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec); 0 zone resets
00:12:17.599 slat (usec): min=2, max=17812, avg=227.56, stdev=1175.40
00:12:17.599 clat (msec): min=3, max=154, avg=33.81, stdev=32.58
00:12:17.599 lat (msec): min=3, max=154, avg=34.04, stdev=32.78
00:12:17.599 clat percentiles (msec):
00:12:17.599 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 9],
00:12:17.599 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 24], 60.00th=[ 27],
00:12:17.599 | 70.00th=[ 42], 80.00th=[ 54], 90.00th=[ 89], 95.00th=[ 106],
00:12:17.599 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 155], 99.95th=[ 155],
00:12:17.599 | 99.99th=[ 155]
00:12:17.599 bw ( KiB/s): min= 8367, max=11768, per=15.99%, avg=10067.50, stdev=2404.87, samples=2
00:12:17.599 iops : min= 2091, max= 2942, avg=2516.50, stdev=601.75, samples=2
00:12:17.599 lat (msec) : 4=0.13%, 10=21.43%, 20=40.22%, 50=21.82%, 100=12.91%
00:12:17.599 lat (msec) : 250=3.49%
00:12:17.599 cpu : usr=2.18%, sys=4.15%, ctx=236, majf=0, minf=1
00:12:17.599 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:12:17.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:17.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:17.599 issued rwts: total=2134,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:17.599 latency : target=0, window=0, percentile=100.00%, depth=128
00:12:17.599
00:12:17.599 Run status group 0 (all jobs):
00:12:17.599 READ: bw=57.1MiB/s (59.9MB/s), 8435KiB/s-22.4MiB/s (8637kB/s-23.5MB/s), io=57.8MiB (60.6MB), run=1004-1012msec
00:12:17.599 WRITE: bw=61.5MiB/s (64.5MB/s), 9.88MiB/s-23.9MiB/s (10.4MB/s-25.1MB/s), io=62.2MiB (65.2MB), run=1004-1012msec
00:12:17.599
00:12:17.599 Disk stats (read/write):
00:12:17.599 nvme0n1: ios=2581/2844, merge=0/0, ticks=36242/58275, in_queue=94517, util=91.18%
00:12:17.599 nvme0n2: ios=2585/2959, merge=0/0, ticks=35484/57240, in_queue=92724, util=95.43%
00:12:17.599 nvme0n3: ios=5057/5120, merge=0/0, ticks=36760/38807, in_queue=75567, util=95.64%
00:12:17.599 nvme0n4: ios=2065/2287, merge=0/0, ticks=36584/68642, in_queue=105226, util=98.54%
00:12:17.599 10:40:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:12:17.599 [global]
00:12:17.599 thread=1
00:12:17.599 invalidate=1
00:12:17.599 rw=randwrite
00:12:17.599 time_based=1
00:12:17.599 runtime=1
00:12:17.599 ioengine=libaio
00:12:17.599 direct=1
00:12:17.599 bs=4096
00:12:17.599 iodepth=128
00:12:17.599 norandommap=0
00:12:17.599 numjobs=1
00:12:17.599
00:12:17.599 verify_dump=1
00:12:17.599 verify_backlog=512
00:12:17.599 verify_state_save=0
00:12:17.599 do_verify=1
00:12:17.599 verify=crc32c-intel
00:12:17.599 [job0]
00:12:17.599 filename=/dev/nvme0n1
00:12:17.599 [job1]
00:12:17.599 filename=/dev/nvme0n2
00:12:17.599 [job2]
00:12:17.599 filename=/dev/nvme0n3
00:12:17.599 [job3]
00:12:17.599 filename=/dev/nvme0n4
00:12:17.599 Could not set queue depth (nvme0n1)
00:12:17.599 Could not set queue depth (nvme0n2)
00:12:17.599 Could not set queue depth (nvme0n3)
00:12:17.599 Could not set queue depth (nvme0n4)
00:12:17.856 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:12:17.856 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:12:17.856 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:12:17.856 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:12:17.856 fio-3.35
00:12:17.856 Starting 4 threads
00:12:19.226
00:12:19.226 job0: (groupid=0, jobs=1): err= 0: pid=3820244: Tue Nov 19 10:40:08 2024
00:12:19.226 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec)
00:12:19.226 slat (nsec): min=1047, max=12170k, avg=156019.54, stdev=905805.00
00:12:19.226 clat (usec): min=7000, max=69003, avg=19078.89, stdev=11577.33
00:12:19.226 lat (usec): min=7003, max=69005, avg=19234.91, stdev=11648.24
00:12:19.226 clat percentiles (usec):
00:12:19.226 | 1.00th=[ 7373], 5.00th=[ 8094], 10.00th=[ 9765], 20.00th=[10421],
00:12:19.226 | 30.00th=[10683], 40.00th=[11600], 50.00th=[13960], 60.00th=[18482],
00:12:19.226 | 70.00th=[21627], 80.00th=[26608], 90.00th=[38536], 95.00th=[46400],
00:12:19.226 | 99.00th=[51119], 99.50th=[51643], 99.90th=[68682], 99.95th=[68682],
00:12:19.226 | 99.99th=[68682]
00:12:19.226 write: IOPS=3626, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1002msec); 0 zone resets
00:12:19.226 slat (nsec): min=1815, max=19351k, avg=116374.49, stdev=672985.56
00:12:19.226 clat (usec): min=409, max=34653, avg=15991.24, stdev=6916.12
00:12:19.226 lat (usec): min=3620, max=34661, avg=16107.62, stdev=6941.08
00:12:19.226 clat percentiles (usec):
00:12:19.226 | 1.00th=[ 4113], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[10683],
00:12:19.226 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12256], 60.00th=[14615],
00:12:19.226 | 70.00th=[18220], 80.00th=[23725], 90.00th=[26870], 95.00th=[30540],
00:12:19.226 | 99.00th=[32637], 99.50th=[33817], 99.90th=[34341], 99.95th=[34866],
00:12:19.226 | 99.99th=[34866]
00:12:19.226 bw ( KiB/s): min=12288, max=12288, per=18.22%, avg=12288.00, stdev= 0.00, samples=1
00:12:19.226 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:12:19.226 lat (usec) : 500=0.01%
00:12:19.226 lat (msec) : 4=0.40%, 10=9.35%, 20=60.00%, 50=29.34%, 100=0.89%
00:12:19.226 cpu : usr=1.40%, sys=3.80%, ctx=449, majf=0, minf=1
00:12:19.226 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1%
00:12:19.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:19.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:19.226 issued rwts: total=3584,3634,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:19.226 latency : target=0, window=0, percentile=100.00%, depth=128
00:12:19.226 job1: (groupid=0, jobs=1): err= 0: pid=3820245: Tue Nov 19 10:40:08 2024
00:12:19.226 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec)
00:12:19.226 slat (nsec): min=1097, max=11095k, avg=91710.92, stdev=587759.89
00:12:19.226 clat (usec): min=2333, max=54345, avg=12342.35, stdev=3477.81
00:12:19.226 lat (usec): min=2340, max=54350, avg=12434.06, stdev=3506.04
00:12:19.226 clat percentiles (usec):
00:12:19.226 | 1.00th=[ 5211], 5.00th=[ 7701], 10.00th=[ 9241], 20.00th=[10028],
00:12:19.226 | 30.00th=[10421], 40.00th=[11600], 50.00th=[11994], 60.00th=[12518],
00:12:19.226 | 70.00th=[13829], 80.00th=[14353], 90.00th=[15664], 95.00th=[17171],
00:12:19.226 | 99.00th=[19792], 99.50th=[21365], 99.90th=[47973], 99.95th=[47973],
00:12:19.226 | 99.99th=[54264]
00:12:19.226 write: IOPS=5281, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1004msec); 0 zone resets
00:12:19.226 slat (nsec): min=1813, max=7550.5k, avg=94084.25, stdev=553905.20
00:12:19.226 clat (usec): min=958, max=29046, avg=12004.51, stdev=3703.46
00:12:19.226 lat (usec): min=970, max=29052, avg=12098.59, stdev=3741.03
00:12:19.226 clat percentiles (usec):
00:12:19.226 | 1.00th=[ 5866], 5.00th=[ 7177], 10.00th=[ 8717], 20.00th=[10159],
00:12:19.226 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[11863],
00:12:19.226 | 70.00th=[12256], 80.00th=[13042], 90.00th=[15533], 95.00th=[20579],
00:12:19.226 | 99.00th=[26870], 99.50th=[28181], 99.90th=[28967], 99.95th=[28967],
00:12:19.226 | 99.99th=[28967]
00:12:19.226 bw ( KiB/s): min=20480, max=20928, per=30.69%, avg=20704.00, stdev=316.78, samples=2
00:12:19.226 iops : min= 5120, max= 5232, avg=5176.00, stdev=79.20, samples=2
00:12:19.226 lat (usec) : 1000=0.07%
00:12:19.226 lat (msec) : 2=0.01%, 4=0.14%, 10=17.65%, 20=78.83%, 50=3.29%
00:12:19.226 lat (msec) : 100=0.01%
00:12:19.226 cpu : usr=4.09%, sys=5.48%, ctx=374, majf=0, minf=2
00:12:19.226 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:12:19.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:19.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:19.226 issued rwts: total=5120,5303,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:19.226 latency : target=0, window=0, percentile=100.00%, depth=128
00:12:19.226 job2: (groupid=0, jobs=1): err= 0: pid=3820246: Tue Nov 19 10:40:08 2024
00:12:19.226 read: IOPS=4238, BW=16.6MiB/s (17.4MB/s)(17.3MiB/1046msec)
00:12:19.226 slat (nsec): min=1086, max=8465.9k, avg=118093.34, stdev=673657.69
00:12:19.226 clat (usec): min=7262, max=61919, avg=15950.76, stdev=8455.73
00:12:19.226 lat (usec): min=7266, max=68842, avg=16068.85, stdev=8490.70
00:12:19.226 clat percentiles (usec):
00:12:19.226 | 1.00th=[ 8094], 5.00th=[10028], 10.00th=[10945], 20.00th=[11731],
00:12:19.226 | 30.00th=[11994], 40.00th=[12387], 50.00th=[13042], 60.00th=[13435],
00:12:19.226 | 70.00th=[15008], 80.00th=[18482], 90.00th=[23200], 95.00th=[31327],
00:12:19.226 | 99.00th=[54789], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604],
00:12:19.226 | 99.99th=[62129]
00:12:19.226 write: IOPS=4405, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1046msec); 0 zone resets
00:12:19.226 slat (nsec): min=1788, max=12760k, avg=99424.32, stdev=582025.15
00:12:19.226 clat (usec): min=5389, max=30016, avg=13407.92, stdev=3252.20
00:12:19.226 lat (usec): min=6433, max=30023, avg=13507.34, stdev=3287.19
00:12:19.226 clat percentiles (usec):
00:12:19.226 | 1.00th=[ 7832], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[11600],
00:12:19.226 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12780], 60.00th=[13304],
00:12:19.226 | 70.00th=[13829], 80.00th=[14484], 90.00th=[16319], 95.00th=[18220],
00:12:19.226 | 99.00th=[29492], 99.50th=[30016], 99.90th=[30016], 99.95th=[30016],
00:12:19.226 | 99.99th=[30016]
00:12:19.226 bw ( KiB/s): min=17792, max=19072, per=27.32%, avg=18432.00, stdev=905.10, samples=2
00:12:19.226 iops : min= 4448, max= 4768, avg=4608.00, stdev=226.27, samples=2
00:12:19.226 lat (msec) : 10=5.20%, 20=84.57%, 50=9.30%, 100=0.93%
00:12:19.226 cpu : usr=2.30%, sys=4.50%, ctx=460, majf=0, minf=1
00:12:19.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:12:19.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:19.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:19.227 issued rwts: total=4433,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:19.227 latency : target=0, window=0, percentile=100.00%, depth=128
00:12:19.227 job3: (groupid=0, jobs=1): err= 0: pid=3820247: Tue Nov 19 10:40:08 2024
00:12:19.227 read: IOPS=3771, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1006msec)
00:12:19.227 slat (nsec): min=1093, max=8942.9k, avg=122626.64, stdev=689589.46
00:12:19.227 clat (usec): min=407, max=35901, avg=14697.65, stdev=4967.79
00:12:19.227 lat (usec): min=6111, max=35912, avg=14820.28, stdev=5026.84
00:12:19.227 clat percentiles (usec):
00:12:19.227 | 1.00th=[ 6456], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[11600],
00:12:19.227 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[13304],
00:12:19.227 | 70.00th=[15008], 80.00th=[18744], 90.00th=[22938], 95.00th=[25035],
00:12:19.227 | 99.00th=[28705], 99.50th=[31327], 99.90th=[32113], 99.95th=[33817],
00:12:19.227 | 99.99th=[35914]
00:12:19.227 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets
00:12:19.227 slat (nsec): min=1845, max=25515k, avg=126295.99, stdev=786614.66
00:12:19.227 clat (usec): min=7744, max=62893, avg=17258.24, stdev=11348.06
00:12:19.227 lat (usec): min=7767, max=62897, avg=17384.54, stdev=11420.16
00:12:19.227 clat percentiles (usec):
00:12:19.227 | 1.00th=[ 8094], 5.00th=[10683], 10.00th=[11076], 20.00th=[11338],
00:12:19.227 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[13304],
00:12:19.227 | 70.00th=[14484], 80.00th=[21890], 90.00th=[32375], 95.00th=[49546],
00:12:19.227 | 99.00th=[59507], 99.50th=[60031], 99.90th=[62653], 99.95th=[62653],
00:12:19.227 | 99.99th=[62653]
00:12:19.227 bw ( KiB/s): min=12288, max=20480, per=24.29%, avg=16384.00, stdev=5792.62, samples=2
00:12:19.227 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2
00:12:19.227 lat (usec) : 500=0.01%
00:12:19.227 lat (msec) : 10=4.59%, 20=74.90%, 50=18.45%, 100=2.04%
00:12:19.227 cpu : usr=3.08%, sys=4.28%, ctx=444, majf=0, minf=2
00:12:19.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:12:19.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:19.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:19.227 issued rwts: total=3794,4096,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:19.227 latency : target=0, window=0, percentile=100.00%, depth=128
00:12:19.227
00:12:19.227 Run status group 0 (all jobs):
00:12:19.227 READ: bw=63.2MiB/s (66.3MB/s), 14.0MiB/s-19.9MiB/s (14.7MB/s-20.9MB/s), io=66.1MiB (69.3MB), run=1002-1046msec
00:12:19.227 WRITE: bw=65.9MiB/s (69.1MB/s), 14.2MiB/s-20.6MiB/s (14.9MB/s-21.6MB/s), io=68.9MiB (72.3MB), run=1002-1046msec
00:12:19.227
00:12:19.227 Disk stats (read/write):
00:12:19.227 nvme0n1: ios=2584/2895, merge=0/0, ticks=17802/12302, in_queue=30104, util=88.68%
00:12:19.227 nvme0n2: ios=4145/4520, merge=0/0, ticks=24066/23703, in_queue=47769, util=94.01%
00:12:19.227 nvme0n3: ios=3675/4096, merge=0/0, ticks=17009/18955, in_queue=35964, util=98.86%
00:12:19.227 nvme0n4: ios=3605/3711, merge=0/0, ticks=18134/17872, in_queue=36006, util=93.40%
00:12:19.227 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:12:19.227 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3820475
00:12:19.227 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:12:19.227 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:12:19.227 [global]
00:12:19.227 thread=1
00:12:19.227 invalidate=1
00:12:19.227 rw=read
00:12:19.227 time_based=1
00:12:19.227 runtime=10
00:12:19.227 ioengine=libaio
00:12:19.227 direct=1
00:12:19.227 bs=4096
00:12:19.227 iodepth=1
00:12:19.227 norandommap=1
00:12:19.227 numjobs=1
00:12:19.227
00:12:19.227 [job0]
00:12:19.227 filename=/dev/nvme0n1
00:12:19.227 [job1]
00:12:19.227 filename=/dev/nvme0n2
00:12:19.227 [job2]
00:12:19.227 filename=/dev/nvme0n3
00:12:19.227 [job3]
00:12:19.227 filename=/dev/nvme0n4
00:12:19.227 Could not set queue depth (nvme0n1)
00:12:19.227 Could not set queue depth (nvme0n2)
00:12:19.227 Could not set queue depth (nvme0n3)
00:12:19.227 Could not set queue depth (nvme0n4)
00:12:19.485 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:12:19.485 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:12:19.485 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:12:19.485 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:12:19.485 fio-3.35
00:12:19.485 Starting 4 threads
00:12:22.008 10:40:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:12:22.265 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=30920704, buflen=4096
00:12:22.265 fio: pid=3820620, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:12:22.265 10:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:12:22.522 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=39157760, buflen=4096
00:12:22.522 fio: pid=3820619, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:12:22.522 10:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:12:22.522 10:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:12:22.779 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=311296, buflen=4096
00:12:22.779 fio: pid=3820615, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:12:22.779 10:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:12:22.779 10:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:12:23.037 10:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:12:23.037 10:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:12:23.037 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=4694016, buflen=4096
00:12:23.037 fio: pid=3820616, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error
00:12:23.037
00:12:23.037 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3820615: Tue Nov 19 10:40:12 2024
00:12:23.037 read: IOPS=24, BW=97.0KiB/s (99.3kB/s)(304KiB/3134msec)
00:12:23.037 slat (usec): min=11, max=14816, avg=221.13, stdev=1686.16
00:12:23.037 clat (usec): min=420, max=45193, avg=40720.82, stdev=4726.45
00:12:23.037 lat (usec): min=480, max=57044, avg=40938.12, stdev=5076.71
00:12:23.037 clat percentiles (usec):
00:12:23.037 | 1.00th=[ 420], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:12:23.037 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:12:23.037 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:12:23.037 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351],
00:12:23.037 | 99.99th=[45351]
00:12:23.037 bw ( KiB/s): min= 93, max= 104, per=0.44%, avg=96.83, stdev= 3.71, samples=6
00:12:23.037 iops : min= 23, max= 26, avg=24.17, stdev= 0.98, samples=6
00:12:23.037 lat (usec) : 500=1.30%
00:12:23.037 lat (msec) : 50=97.40%
00:12:23.037 cpu : usr=0.10%, sys=0.00%, ctx=81, majf=0, minf=1
00:12:23.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:23.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:23.037 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:23.037 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:23.037 latency : target=0, window=0, percentile=100.00%, depth=1
00:12:23.037 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3820616: Tue Nov 19 10:40:12 2024
00:12:23.037 read: IOPS=342, BW=1369KiB/s (1402kB/s)(4584KiB/3349msec)
00:12:23.037 slat (usec): min=6, max=14738, avg=33.13, stdev=501.57
00:12:23.037 clat (usec): min=164, max=42409, avg=2886.44, stdev=10112.67
00:12:23.037 lat (usec): min=185, max=55910, avg=2913.45, stdev=10190.98
00:12:23.037 clat percentiles (usec):
00:12:23.037 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196],
00:12:23.037 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 210],
00:12:23.037 | 70.00th=[ 217], 80.00th=[ 231], 90.00th=[ 265], 95.00th=[41157],
00:12:23.037 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:12:23.037 | 99.99th=[42206]
00:12:23.037 bw ( KiB/s): min= 93, max= 8616, per=6.92%, avg=1516.83, stdev=3477.87, samples=6
00:12:23.037 iops : min= 23, max= 2154, avg=379.17, stdev=869.49, samples=6
00:12:23.037 lat (usec) : 250=86.40%, 500=6.97%
00:12:23.037 lat (msec) : 50=6.54%
00:12:23.037 cpu : usr=0.33%, sys=0.72%, ctx=1150, majf=0, minf=2
00:12:23.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:23.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:23.037 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:23.037 issued rwts: total=1147,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:23.037 latency : target=0, window=0, percentile=100.00%, depth=1
00:12:23.037 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3820619: Tue Nov 19 10:40:12 2024
00:12:23.037 read: IOPS=3270, BW=12.8MiB/s (13.4MB/s)(37.3MiB/2923msec)
00:12:23.037 slat (usec): min=6, max=15301, avg=10.23, stdev=196.61
00:12:23.037 clat (usec): min=151, max=42374, avg=291.69, stdev=1965.75
00:12:23.037 lat (usec): min=158, max=42382, avg=301.92, stdev=1976.37
00:12:23.037 clat percentiles (usec):
00:12:23.037 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 182],
00:12:23.037 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198],
00:12:23.037 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 227], 95.00th=[ 247],
00:12:23.037 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[41157], 99.95th=[41681],
00:12:23.037 | 99.99th=[42206]
00:12:23.037 bw ( KiB/s): min= 4128, max=19544, per=55.91%, avg=12241.60, stdev=6981.19, samples=5
00:12:23.037 iops : min= 1032, max= 4886, avg=3060.40, stdev=1745.30, samples=5
00:12:23.037 lat (usec) : 250=95.60%, 500=4.13%, 1000=0.01%
00:12:23.037 lat (msec) : 4=0.02%, 50=0.23%
00:12:23.037 cpu : usr=1.03%, sys=2.77%, ctx=9566, majf=0, minf=2
00:12:23.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:23.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:23.038 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:23.038 issued rwts: total=9561,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:23.038 latency : target=0, window=0, percentile=100.00%, depth=1
00:12:23.038 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3820620: Tue Nov 19 10:40:12 2024
00:12:23.038 read: IOPS=2791, BW=10.9MiB/s (11.4MB/s)(29.5MiB/2705msec)
00:12:23.038 slat (nsec): min=6656, max=32563, avg=7679.38, stdev=1323.65
00:12:23.038 clat (usec): min=149, max=42039, avg=347.75, stdev=2494.20
00:12:23.038 lat (usec): min=157, max=42061, avg=355.43, stdev=2494.86
00:12:23.038 clat percentiles (usec):
00:12:23.038 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182],
00:12:23.038 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 196],
00:12:23.038 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 215], 95.00th=[ 243],
00:12:23.038 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[41681], 99.95th=[41681],
00:12:23.038
| 99.99th=[42206] 00:12:23.038 bw ( KiB/s): min= 104, max=19736, per=48.36%, avg=10587.20, stdev=9908.93, samples=5 00:12:23.038 iops : min= 26, max= 4936, avg=2647.20, stdev=2477.69, samples=5 00:12:23.038 lat (usec) : 250=96.68%, 500=2.91%, 750=0.01% 00:12:23.038 lat (msec) : 4=0.01%, 50=0.37% 00:12:23.038 cpu : usr=1.04%, sys=2.40%, ctx=7550, majf=0, minf=2 00:12:23.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.038 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.038 issued rwts: total=7550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.038 00:12:23.038 Run status group 0 (all jobs): 00:12:23.038 READ: bw=21.4MiB/s (22.4MB/s), 97.0KiB/s-12.8MiB/s (99.3kB/s-13.4MB/s), io=71.6MiB (75.1MB), run=2705-3349msec 00:12:23.038 00:12:23.038 Disk stats (read/write): 00:12:23.038 nvme0n1: ios=75/0, merge=0/0, ticks=3052/0, in_queue=3052, util=95.22% 00:12:23.038 nvme0n2: ios=1140/0, merge=0/0, ticks=3056/0, in_queue=3056, util=95.95% 00:12:23.038 nvme0n3: ios=9338/0, merge=0/0, ticks=3378/0, in_queue=3378, util=99.22% 00:12:23.038 nvme0n4: ios=7100/0, merge=0/0, ticks=2517/0, in_queue=2517, util=96.44% 00:12:23.295 10:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:23.295 10:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:23.295 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:23.295 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:12:23.552 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:23.552 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:23.810 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:23.810 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:24.068 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:24.068 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3820475 00:12:24.068 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:24.068 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.068 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:24.068 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:24.068 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:24.068 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.068 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:24.068 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 
00:12:24.068 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:24.068 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:24.068 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:24.068 nvmf hotplug test: fio failed as expected 00:12:24.068 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.326 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:24.326 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:24.326 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:24.326 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:24.326 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:24.326 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:24.326 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:24.326 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:24.326 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:24.326 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:24.326 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:24.326 rmmod nvme_tcp 00:12:24.326 rmmod nvme_fabrics 00:12:24.326 rmmod nvme_keyring 00:12:24.326 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:24.326 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:24.326 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:24.326 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3817542 ']' 00:12:24.326 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3817542 00:12:24.326 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3817542 ']' 00:12:24.326 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3817542 00:12:24.326 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:24.326 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.326 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3817542 00:12:24.326 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.326 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.326 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3817542' 00:12:24.326 killing process with pid 3817542 00:12:24.326 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3817542 00:12:24.326 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3817542 00:12:24.585 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:24.585 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:24.585 10:40:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:24.585 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:24.585 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:24.585 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:24.585 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:24.585 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:24.585 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:24.585 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.585 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.585 10:40:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.168 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:27.168 00:12:27.168 real 0m27.423s 00:12:27.168 user 1m48.041s 00:12:27.168 sys 0m8.426s 00:12:27.168 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.168 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.168 ************************************ 00:12:27.168 END TEST nvmf_fio_target 00:12:27.168 ************************************ 00:12:27.168 10:40:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:27.168 10:40:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- 
# '[' 3 -le 1 ']' 00:12:27.168 10:40:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.168 10:40:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:27.168 ************************************ 00:12:27.168 START TEST nvmf_bdevio 00:12:27.168 ************************************ 00:12:27.168 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:27.168 * Looking for test storage... 00:12:27.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.168 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:27.168 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:12:27.168 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:27.168 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:27.168 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.168 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.168 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.168 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.168 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.169 10:40:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.169 10:40:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:27.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.169 --rc genhtml_branch_coverage=1 00:12:27.169 --rc genhtml_function_coverage=1 00:12:27.169 --rc genhtml_legend=1 00:12:27.169 --rc geninfo_all_blocks=1 00:12:27.169 --rc geninfo_unexecuted_blocks=1 00:12:27.169 00:12:27.169 ' 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:27.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.169 --rc genhtml_branch_coverage=1 00:12:27.169 --rc genhtml_function_coverage=1 00:12:27.169 --rc genhtml_legend=1 00:12:27.169 --rc geninfo_all_blocks=1 00:12:27.169 --rc geninfo_unexecuted_blocks=1 00:12:27.169 00:12:27.169 ' 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:27.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.169 --rc genhtml_branch_coverage=1 00:12:27.169 --rc genhtml_function_coverage=1 00:12:27.169 --rc genhtml_legend=1 00:12:27.169 --rc geninfo_all_blocks=1 00:12:27.169 --rc geninfo_unexecuted_blocks=1 00:12:27.169 00:12:27.169 ' 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:27.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.169 --rc genhtml_branch_coverage=1 00:12:27.169 --rc genhtml_function_coverage=1 00:12:27.169 --rc genhtml_legend=1 00:12:27.169 --rc geninfo_all_blocks=1 00:12:27.169 --rc geninfo_unexecuted_blocks=1 00:12:27.169 00:12:27.169 ' 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.169 10:40:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:27.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.169 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.170 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:27.170 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:27.170 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:27.170 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:33.737 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.737 10:40:22 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:33.737 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:33.737 Found net devices under 0000:86:00.0: cvl_0_0 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:33.737 Found net devices under 0000:86:00.1: cvl_0_1 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:33.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:12:33.737 00:12:33.737 --- 10.0.0.2 ping statistics --- 00:12:33.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.737 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:12:33.737 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:33.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:12:33.737 00:12:33.737 --- 10.0.0.1 ping statistics --- 00:12:33.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.737 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3824959 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3824959 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3824959 ']' 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.738 [2024-11-19 10:40:22.696219] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:12:33.738 [2024-11-19 10:40:22.696267] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.738 [2024-11-19 10:40:22.776413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.738 [2024-11-19 10:40:22.816262] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.738 [2024-11-19 10:40:22.816305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:33.738 [2024-11-19 10:40:22.816312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.738 [2024-11-19 10:40:22.816318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.738 [2024-11-19 10:40:22.816323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.738 [2024-11-19 10:40:22.817935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:33.738 [2024-11-19 10:40:22.818040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:33.738 [2024-11-19 10:40:22.818128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.738 [2024-11-19 10:40:22.818128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.738 [2024-11-19 10:40:22.965945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.738 10:40:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.738 Malloc0 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.738 [2024-11-19 
10:40:23.028484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:33.738 { 00:12:33.738 "params": { 00:12:33.738 "name": "Nvme$subsystem", 00:12:33.738 "trtype": "$TEST_TRANSPORT", 00:12:33.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:33.738 "adrfam": "ipv4", 00:12:33.738 "trsvcid": "$NVMF_PORT", 00:12:33.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:33.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:33.738 "hdgst": ${hdgst:-false}, 00:12:33.738 "ddgst": ${ddgst:-false} 00:12:33.738 }, 00:12:33.738 "method": "bdev_nvme_attach_controller" 00:12:33.738 } 00:12:33.738 EOF 00:12:33.738 )") 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:33.738 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:33.738 "params": { 00:12:33.738 "name": "Nvme1", 00:12:33.738 "trtype": "tcp", 00:12:33.738 "traddr": "10.0.0.2", 00:12:33.738 "adrfam": "ipv4", 00:12:33.738 "trsvcid": "4420", 00:12:33.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.738 "hdgst": false, 00:12:33.738 "ddgst": false 00:12:33.738 }, 00:12:33.738 "method": "bdev_nvme_attach_controller" 00:12:33.738 }' 00:12:33.738 [2024-11-19 10:40:23.080769] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:12:33.738 [2024-11-19 10:40:23.080811] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3825107 ] 00:12:33.738 [2024-11-19 10:40:23.157760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:33.738 [2024-11-19 10:40:23.202239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.738 [2024-11-19 10:40:23.202295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.738 [2024-11-19 10:40:23.202295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.738 I/O targets: 00:12:33.738 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:33.738 00:12:33.738 00:12:33.738 CUnit - A unit testing framework for C - Version 2.1-3 00:12:33.738 http://cunit.sourceforge.net/ 00:12:33.738 00:12:33.738 00:12:33.738 Suite: bdevio tests on: Nvme1n1 00:12:33.738 Test: blockdev write read block ...passed 00:12:33.738 Test: blockdev write zeroes read block ...passed 00:12:33.738 Test: blockdev write zeroes read no split ...passed 00:12:33.738 Test: blockdev write zeroes read split 
...passed 00:12:33.738 Test: blockdev write zeroes read split partial ...passed 00:12:33.738 Test: blockdev reset ...[2024-11-19 10:40:23.480562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:33.738 [2024-11-19 10:40:23.480625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57a340 (9): Bad file descriptor 00:12:33.738 [2024-11-19 10:40:23.496068] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:12:33.738 passed 00:12:33.738 Test: blockdev write read 8 blocks ...passed 00:12:33.738 Test: blockdev write read size > 128k ...passed 00:12:33.738 Test: blockdev write read invalid size ...passed 00:12:34.005 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:34.005 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:34.005 Test: blockdev write read max offset ...passed 00:12:34.005 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:34.005 Test: blockdev writev readv 8 blocks ...passed 00:12:34.005 Test: blockdev writev readv 30 x 1block ...passed 00:12:34.005 Test: blockdev writev readv block ...passed 00:12:34.005 Test: blockdev writev readv size > 128k ...passed 00:12:34.005 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:34.005 Test: blockdev comparev and writev ...[2024-11-19 10:40:23.664876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.005 [2024-11-19 10:40:23.664906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:34.005 [2024-11-19 10:40:23.664921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.005 [2024-11-19 
10:40:23.664929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:34.005 [2024-11-19 10:40:23.665182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.005 [2024-11-19 10:40:23.665194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:34.005 [2024-11-19 10:40:23.665210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.005 [2024-11-19 10:40:23.665217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:34.005 [2024-11-19 10:40:23.665441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.005 [2024-11-19 10:40:23.665452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:34.005 [2024-11-19 10:40:23.665464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.005 [2024-11-19 10:40:23.665471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:34.005 [2024-11-19 10:40:23.665703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.005 [2024-11-19 10:40:23.665718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:34.005 [2024-11-19 10:40:23.665729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.005 [2024-11-19 10:40:23.665736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:34.005 passed 00:12:34.005 Test: blockdev nvme passthru rw ...passed 00:12:34.005 Test: blockdev nvme passthru vendor specific ...[2024-11-19 10:40:23.748584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:34.005 [2024-11-19 10:40:23.748601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:34.005 [2024-11-19 10:40:23.748706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:34.005 [2024-11-19 10:40:23.748716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:34.005 [2024-11-19 10:40:23.748813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:34.005 [2024-11-19 10:40:23.748823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:34.005 [2024-11-19 10:40:23.748924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:34.005 [2024-11-19 10:40:23.748934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:34.005 passed 00:12:34.005 Test: blockdev nvme admin passthru ...passed 00:12:34.283 Test: blockdev copy ...passed 00:12:34.283 00:12:34.283 Run Summary: Type Total Ran Passed Failed Inactive 00:12:34.283 suites 1 1 n/a 0 0 00:12:34.283 tests 23 23 23 0 0 00:12:34.283 asserts 152 152 152 0 n/a 00:12:34.283 00:12:34.283 Elapsed time = 0.898 seconds 
00:12:34.283 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.283 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.283 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:34.283 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.283 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:34.283 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:34.283 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:34.283 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:34.283 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:34.283 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:34.283 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:34.283 10:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:34.283 rmmod nvme_tcp 00:12:34.283 rmmod nvme_fabrics 00:12:34.283 rmmod nvme_keyring 00:12:34.283 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:34.283 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:34.283 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:34.283 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3824959 ']' 00:12:34.283 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3824959 00:12:34.283 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 3824959 ']' 00:12:34.283 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3824959 00:12:34.283 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:34.283 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.283 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3824959 00:12:34.588 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:34.588 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:34.588 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3824959' 00:12:34.588 killing process with pid 3824959 00:12:34.588 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3824959 00:12:34.588 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3824959 00:12:34.588 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:34.588 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:34.588 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:34.588 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:34.588 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:34.588 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:34.588 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:34.588 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:12:34.588 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:34.588 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.588 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.588 10:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.137 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:37.137 00:12:37.137 real 0m9.932s 00:12:37.137 user 0m9.223s 00:12:37.137 sys 0m4.994s 00:12:37.137 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.137 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:37.137 ************************************ 00:12:37.137 END TEST nvmf_bdevio 00:12:37.137 ************************************ 00:12:37.137 10:40:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:37.137 00:12:37.137 real 4m39.025s 00:12:37.137 user 10m28.880s 00:12:37.138 sys 1m38.379s 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:37.138 ************************************ 00:12:37.138 END TEST nvmf_target_core 00:12:37.138 ************************************ 00:12:37.138 10:40:26 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:37.138 10:40:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.138 10:40:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.138 10:40:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:12:37.138 ************************************ 00:12:37.138 START TEST nvmf_target_extra 00:12:37.138 ************************************ 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:37.138 * Looking for test storage... 00:12:37.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:37.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.138 --rc genhtml_branch_coverage=1 00:12:37.138 --rc genhtml_function_coverage=1 00:12:37.138 --rc genhtml_legend=1 00:12:37.138 --rc geninfo_all_blocks=1 
00:12:37.138 --rc geninfo_unexecuted_blocks=1 00:12:37.138 00:12:37.138 ' 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:37.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.138 --rc genhtml_branch_coverage=1 00:12:37.138 --rc genhtml_function_coverage=1 00:12:37.138 --rc genhtml_legend=1 00:12:37.138 --rc geninfo_all_blocks=1 00:12:37.138 --rc geninfo_unexecuted_blocks=1 00:12:37.138 00:12:37.138 ' 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:37.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.138 --rc genhtml_branch_coverage=1 00:12:37.138 --rc genhtml_function_coverage=1 00:12:37.138 --rc genhtml_legend=1 00:12:37.138 --rc geninfo_all_blocks=1 00:12:37.138 --rc geninfo_unexecuted_blocks=1 00:12:37.138 00:12:37.138 ' 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:37.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.138 --rc genhtml_branch_coverage=1 00:12:37.138 --rc genhtml_function_coverage=1 00:12:37.138 --rc genhtml_legend=1 00:12:37.138 --rc geninfo_all_blocks=1 00:12:37.138 --rc geninfo_unexecuted_blocks=1 00:12:37.138 00:12:37.138 ' 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.138 ************************************ 00:12:37.138 START TEST nvmf_example 00:12:37.138 ************************************ 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:37.138 * Looking for test storage... 00:12:37.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.138 
10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.138 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:37.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.139 --rc genhtml_branch_coverage=1 00:12:37.139 --rc genhtml_function_coverage=1 00:12:37.139 --rc genhtml_legend=1 00:12:37.139 --rc geninfo_all_blocks=1 00:12:37.139 --rc geninfo_unexecuted_blocks=1 00:12:37.139 00:12:37.139 ' 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:37.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.139 --rc genhtml_branch_coverage=1 00:12:37.139 --rc genhtml_function_coverage=1 00:12:37.139 --rc genhtml_legend=1 00:12:37.139 --rc geninfo_all_blocks=1 00:12:37.139 --rc geninfo_unexecuted_blocks=1 00:12:37.139 00:12:37.139 ' 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:37.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.139 --rc genhtml_branch_coverage=1 00:12:37.139 --rc genhtml_function_coverage=1 00:12:37.139 --rc genhtml_legend=1 00:12:37.139 --rc geninfo_all_blocks=1 00:12:37.139 --rc geninfo_unexecuted_blocks=1 00:12:37.139 00:12:37.139 ' 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:37.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.139 --rc 
genhtml_branch_coverage=1 00:12:37.139 --rc genhtml_function_coverage=1 00:12:37.139 --rc genhtml_legend=1 00:12:37.139 --rc geninfo_all_blocks=1 00:12:37.139 --rc geninfo_unexecuted_blocks=1 00:12:37.139 00:12:37.139 ' 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:37.139 10:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.139 
10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:37.139 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:43.706 10:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:43.706 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:43.706 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:43.706 Found net devices under 0000:86:00.0: cvl_0_0 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.706 10:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:43.706 Found net devices under 0000:86:00.1: cvl_0_1 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.706 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.707 
10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:43.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:43.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:12:43.707 00:12:43.707 --- 10.0.0.2 ping statistics --- 00:12:43.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.707 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:43.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:43.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:12:43.707 00:12:43.707 --- 10.0.0.1 ping statistics --- 00:12:43.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.707 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:43.707 10:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3828934 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3828934 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3828934 ']' 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:12:43.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.707 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:44.271 
10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:44.271 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:56.459 Initializing NVMe Controllers 00:12:56.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:56.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:56.459 Initialization complete. Launching workers. 00:12:56.459 ======================================================== 00:12:56.459 Latency(us) 00:12:56.459 Device Information : IOPS MiB/s Average min max 00:12:56.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18051.04 70.51 3544.86 663.63 16089.75 00:12:56.459 ======================================================== 00:12:56.459 Total : 18051.04 70.51 3544.86 663.63 16089.75 00:12:56.459 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:56.459 rmmod nvme_tcp 00:12:56.459 rmmod nvme_fabrics 00:12:56.459 rmmod nvme_keyring 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
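The target configuration traced above reduces to five RPC calls followed by one perf invocation. A minimal sketch (an illustration, not the test script itself — it assumes an SPDK checkout with `scripts/rpc.py` on the PATH as `rpc.py` and an `nvmf_tgt` listening on `/var/tmp/spdk.sock`; with `DRY_RUN=1`, the default here, each command is only printed, never executed):

```shell
# Sketch of the RPC sequence from the trace above. Assumption: rpc.py is
# SPDK's scripts/rpc.py and a target is up. With DRY_RUN=1 (default) the
# commands are only echoed so the sequence can be inspected safely.
rpc() {
    if [ "${DRY_RUN:-1}" = 1 ]; then
        echo "rpc.py $*"
    else
        rpc.py "$@"
    fi
}

rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8192 B in-capsule data
rpc bdev_malloc_create 64 512                   # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The workload the log then runs against that listener:
echo "spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10" \
     "-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'"
```

Run for real with `DRY_RUN=0` against a live target; the argument values (queue depth 64, 4 KiB I/O, 30% reads mixed workload, 10 s) are taken verbatim from the trace.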
00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3828934 ']' 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3828934 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3828934 ']' 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3828934 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3828934 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:56.459 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3828934' 00:12:56.459 killing process with pid 3828934 00:12:56.460 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3828934 00:12:56.460 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3828934 00:12:56.460 nvmf threads initialize successfully 00:12:56.460 bdev subsystem init successfully 00:12:56.460 created a nvmf target service 00:12:56.460 create targets's poll groups done 00:12:56.460 all subsystems of target started 00:12:56.460 nvmf target is running 00:12:56.460 all subsystems of target stopped 00:12:56.460 destroy targets's poll groups done 00:12:56.460 destroyed the nvmf target service 00:12:56.460 bdev subsystem 
finish successfully 00:12:56.460 nvmf threads destroy successfully 00:12:56.460 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:56.460 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:56.460 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:56.460 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:56.460 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:56.460 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:56.460 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:56.460 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:56.460 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:56.460 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.460 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.460 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.719 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:56.719 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:56.719 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:56.719 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:56.719 00:12:56.719 real 0m19.824s 00:12:56.719 user 0m45.787s 00:12:56.719 sys 0m6.139s 00:12:56.719 
10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.719 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:56.719 ************************************ 00:12:56.719 END TEST nvmf_example 00:12:56.719 ************************************ 00:12:56.978 10:40:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:56.978 10:40:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:56.978 10:40:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.978 10:40:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:56.978 ************************************ 00:12:56.978 START TEST nvmf_filesystem 00:12:56.978 ************************************ 00:12:56.978 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:56.978 * Looking for test storage... 
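The namespace topology that `nvmf_tcp_init` built at the start of the nvmf_example run above — and the `iptables-save | grep -v SPDK_NVMF | iptables-restore` cleanup `iptr` performed near its end — can be sketched as below. This is an illustration of the pattern, not the `common.sh` code itself: it assumes two wired-together ports named `cvl_0_0` and `cvl_0_1`, and because the commands need root, `DRY_RUN=1` (the default) only prints them.

```shell
# Sketch of nvmf_tcp_init/fini from the trace: move the target port into a
# network namespace, address both ends, and punch a firewall hole tagged
# SPDK_NVMF so every test rule can later be stripped wholesale.
run() {
    if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi
}

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                 # target side lives in the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the host
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF

# Teardown mirrors the iptr/remove_spdk_ns steps: drop every SPDK_NVMF-tagged
# rule in one pass, delete the namespace, flush the remaining interface.
run sh -c 'iptables-save | grep -v SPDK_NVMF | iptables-restore'
run ip netns delete "$NS"
run ip -4 addr flush cvl_0_1
```

Tagging each inserted rule with an `-m comment --comment SPDK_NVMF` marker is what makes the one-line save/filter/restore teardown possible, regardless of how many rules the tests added.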
00:12:56.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.978 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:56.978 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:56.978 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:56.978 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:56.978 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:56.978 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:56.978 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:56.978 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:56.978 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:56.978 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:56.978 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:56.978 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:56.979 
10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:56.979 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:56.979 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:56.979 --rc genhtml_branch_coverage=1 00:12:56.979 --rc genhtml_function_coverage=1 00:12:56.979 --rc genhtml_legend=1 00:12:56.979 --rc geninfo_all_blocks=1 00:12:56.979 --rc geninfo_unexecuted_blocks=1 00:12:56.979 00:12:56.979 ' 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:57.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.243 --rc genhtml_branch_coverage=1 00:12:57.243 --rc genhtml_function_coverage=1 00:12:57.243 --rc genhtml_legend=1 00:12:57.243 --rc geninfo_all_blocks=1 00:12:57.243 --rc geninfo_unexecuted_blocks=1 00:12:57.243 00:12:57.243 ' 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:57.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.243 --rc genhtml_branch_coverage=1 00:12:57.243 --rc genhtml_function_coverage=1 00:12:57.243 --rc genhtml_legend=1 00:12:57.243 --rc geninfo_all_blocks=1 00:12:57.243 --rc geninfo_unexecuted_blocks=1 00:12:57.243 00:12:57.243 ' 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:57.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.243 --rc genhtml_branch_coverage=1 00:12:57.243 --rc genhtml_function_coverage=1 00:12:57.243 --rc genhtml_legend=1 00:12:57.243 --rc geninfo_all_blocks=1 00:12:57.243 --rc geninfo_unexecuted_blocks=1 00:12:57.243 00:12:57.243 ' 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:57.243 10:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:57.243 10:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:57.243 10:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:57.243 10:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:57.243 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:57.244 10:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:57.244 
10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:57.244 #define SPDK_CONFIG_H 00:12:57.244 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:57.244 #define SPDK_CONFIG_APPS 1 00:12:57.244 #define SPDK_CONFIG_ARCH native 00:12:57.244 #undef SPDK_CONFIG_ASAN 00:12:57.244 #undef SPDK_CONFIG_AVAHI 00:12:57.244 #undef SPDK_CONFIG_CET 00:12:57.244 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:57.244 #define SPDK_CONFIG_COVERAGE 1 00:12:57.244 #define SPDK_CONFIG_CROSS_PREFIX 00:12:57.244 #undef SPDK_CONFIG_CRYPTO 00:12:57.244 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:57.244 #undef SPDK_CONFIG_CUSTOMOCF 00:12:57.244 #undef SPDK_CONFIG_DAOS 00:12:57.244 #define SPDK_CONFIG_DAOS_DIR 00:12:57.244 #define SPDK_CONFIG_DEBUG 1 00:12:57.244 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:57.244 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:57.244 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:57.244 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:57.244 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:57.244 #undef SPDK_CONFIG_DPDK_UADK 00:12:57.244 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:57.244 #define SPDK_CONFIG_EXAMPLES 1 00:12:57.244 #undef SPDK_CONFIG_FC 00:12:57.244 #define SPDK_CONFIG_FC_PATH 00:12:57.244 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:57.244 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:57.244 #define SPDK_CONFIG_FSDEV 1 00:12:57.244 #undef SPDK_CONFIG_FUSE 00:12:57.244 #undef SPDK_CONFIG_FUZZER 00:12:57.244 #define SPDK_CONFIG_FUZZER_LIB 00:12:57.244 #undef SPDK_CONFIG_GOLANG 00:12:57.244 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:57.244 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:57.244 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:57.244 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:57.244 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:57.244 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:57.244 #undef SPDK_CONFIG_HAVE_LZ4 00:12:57.244 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:57.244 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:57.244 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:57.244 #define SPDK_CONFIG_IDXD 1 00:12:57.244 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:57.244 #undef SPDK_CONFIG_IPSEC_MB 00:12:57.244 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:57.244 #define SPDK_CONFIG_ISAL 1 00:12:57.244 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:57.244 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:57.244 #define SPDK_CONFIG_LIBDIR 00:12:57.244 #undef SPDK_CONFIG_LTO 00:12:57.244 #define SPDK_CONFIG_MAX_LCORES 128 00:12:57.244 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:57.244 #define SPDK_CONFIG_NVME_CUSE 1 00:12:57.244 #undef SPDK_CONFIG_OCF 00:12:57.244 #define SPDK_CONFIG_OCF_PATH 00:12:57.244 #define SPDK_CONFIG_OPENSSL_PATH 00:12:57.244 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:57.244 #define SPDK_CONFIG_PGO_DIR 00:12:57.244 #undef SPDK_CONFIG_PGO_USE 00:12:57.244 #define SPDK_CONFIG_PREFIX /usr/local 00:12:57.244 #undef SPDK_CONFIG_RAID5F 00:12:57.244 #undef SPDK_CONFIG_RBD 00:12:57.244 #define SPDK_CONFIG_RDMA 1 00:12:57.244 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:57.244 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:57.244 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:57.244 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:57.244 #define SPDK_CONFIG_SHARED 1 00:12:57.244 #undef SPDK_CONFIG_SMA 00:12:57.244 #define SPDK_CONFIG_TESTS 1 00:12:57.244 #undef SPDK_CONFIG_TSAN 00:12:57.244 #define SPDK_CONFIG_UBLK 1 00:12:57.244 #define SPDK_CONFIG_UBSAN 1 00:12:57.244 #undef SPDK_CONFIG_UNIT_TESTS 00:12:57.244 #undef SPDK_CONFIG_URING 00:12:57.244 #define SPDK_CONFIG_URING_PATH 00:12:57.244 #undef SPDK_CONFIG_URING_ZNS 00:12:57.244 #undef SPDK_CONFIG_USDT 00:12:57.244 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:57.244 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:57.244 #define SPDK_CONFIG_VFIO_USER 1 00:12:57.244 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:57.244 #define SPDK_CONFIG_VHOST 1 00:12:57.244 #define SPDK_CONFIG_VIRTIO 1 00:12:57.244 #undef SPDK_CONFIG_VTUNE 00:12:57.244 #define SPDK_CONFIG_VTUNE_DIR 00:12:57.244 #define SPDK_CONFIG_WERROR 1 00:12:57.244 #define SPDK_CONFIG_WPDK_DIR 00:12:57.244 #undef SPDK_CONFIG_XNVME 00:12:57.244 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:57.244 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:57.245 10:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:57.245 
10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:57.245 10:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:57.245 
10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:57.245 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:57.246 10:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:57.246 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3831234 ]] 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3831234 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.CngU07 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.CngU07/tests/target /tmp/spdk.CngU07 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189116710912 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963973632 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6847262720 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97970618368 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169753088 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192797184 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:57.247 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981435904 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981988864 00:12:57.248 10:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=552960 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:57.248 * Looking for test storage... 
00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189116710912 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9061855232 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.248 10:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:57.248 10:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:57.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.248 --rc genhtml_branch_coverage=1 00:12:57.248 --rc genhtml_function_coverage=1 00:12:57.248 --rc genhtml_legend=1 00:12:57.248 --rc geninfo_all_blocks=1 00:12:57.248 --rc geninfo_unexecuted_blocks=1 00:12:57.248 00:12:57.248 ' 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:57.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.248 --rc genhtml_branch_coverage=1 00:12:57.248 --rc genhtml_function_coverage=1 00:12:57.248 --rc genhtml_legend=1 00:12:57.248 --rc geninfo_all_blocks=1 00:12:57.248 --rc geninfo_unexecuted_blocks=1 00:12:57.248 00:12:57.248 ' 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:57.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.248 --rc genhtml_branch_coverage=1 00:12:57.248 --rc genhtml_function_coverage=1 00:12:57.248 --rc genhtml_legend=1 00:12:57.248 --rc geninfo_all_blocks=1 00:12:57.248 --rc geninfo_unexecuted_blocks=1 00:12:57.248 00:12:57.248 ' 00:12:57.248 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:57.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.248 --rc genhtml_branch_coverage=1 00:12:57.248 --rc genhtml_function_coverage=1 00:12:57.248 --rc genhtml_legend=1 00:12:57.248 --rc geninfo_all_blocks=1 00:12:57.248 --rc geninfo_unexecuted_blocks=1 00:12:57.248 00:12:57.248 ' 00:12:57.248 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.248 10:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:57.248 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.248 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.248 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.248 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.248 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.248 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.248 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.249 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.249 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.249 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.249 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:57.249 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:57.249 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.249 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.249 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.249 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.249 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.249 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:57.249 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.249 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.249 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.249 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.249 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.249 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:57.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:57.508 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.074 10:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:04.074 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:04.074 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.074 10:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:04.074 Found net devices under 0000:86:00.0: cvl_0_0 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:04.074 Found net devices under 0000:86:00.1: cvl_0_1 00:13:04.074 10:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.074 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.075 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:04.075 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:04.075 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.075 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:13:04.075 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:04.075 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:04.075 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.075 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.075 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.075 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:04.075 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:04.075 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:04.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:04.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:13:04.075 00:13:04.075 --- 10.0.0.2 ping statistics --- 00:13:04.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.075 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:04.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:04.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:13:04.075 00:13:04.075 --- 10.0.0.1 ping statistics --- 00:13:04.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.075 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:04.075 10:40:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:04.075 ************************************ 00:13:04.075 START TEST nvmf_filesystem_no_in_capsule 00:13:04.075 ************************************ 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3834383 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3834383 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 3834383 ']' 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.075 [2024-11-19 10:40:53.182427] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:13:04.075 [2024-11-19 10:40:53.182472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.075 [2024-11-19 10:40:53.265440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:04.075 [2024-11-19 10:40:53.307085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.075 [2024-11-19 10:40:53.307121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:04.075 [2024-11-19 10:40:53.307128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.075 [2024-11-19 10:40:53.307134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.075 [2024-11-19 10:40:53.307139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:04.075 [2024-11-19 10:40:53.308697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.075 [2024-11-19 10:40:53.308786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.075 [2024-11-19 10:40:53.308803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.075 [2024-11-19 10:40:53.308809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.075 [2024-11-19 10:40:53.452657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.075 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.075 Malloc1 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.076 [2024-11-19 10:40:53.594771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:04.076 10:40:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:04.076 { 00:13:04.076 "name": "Malloc1", 00:13:04.076 "aliases": [ 00:13:04.076 "7522931d-0e62-4e1a-a848-614d60656787" 00:13:04.076 ], 00:13:04.076 "product_name": "Malloc disk", 00:13:04.076 "block_size": 512, 00:13:04.076 "num_blocks": 1048576, 00:13:04.076 "uuid": "7522931d-0e62-4e1a-a848-614d60656787", 00:13:04.076 "assigned_rate_limits": { 00:13:04.076 "rw_ios_per_sec": 0, 00:13:04.076 "rw_mbytes_per_sec": 0, 00:13:04.076 "r_mbytes_per_sec": 0, 00:13:04.076 "w_mbytes_per_sec": 0 00:13:04.076 }, 00:13:04.076 "claimed": true, 00:13:04.076 "claim_type": "exclusive_write", 00:13:04.076 "zoned": false, 00:13:04.076 "supported_io_types": { 00:13:04.076 "read": true, 00:13:04.076 "write": true, 00:13:04.076 "unmap": true, 00:13:04.076 "flush": true, 00:13:04.076 "reset": true, 00:13:04.076 "nvme_admin": false, 00:13:04.076 "nvme_io": false, 00:13:04.076 "nvme_io_md": false, 00:13:04.076 "write_zeroes": true, 00:13:04.076 "zcopy": true, 00:13:04.076 "get_zone_info": false, 00:13:04.076 "zone_management": false, 00:13:04.076 "zone_append": false, 00:13:04.076 "compare": false, 00:13:04.076 "compare_and_write": 
false, 00:13:04.076 "abort": true, 00:13:04.076 "seek_hole": false, 00:13:04.076 "seek_data": false, 00:13:04.076 "copy": true, 00:13:04.076 "nvme_iov_md": false 00:13:04.076 }, 00:13:04.076 "memory_domains": [ 00:13:04.076 { 00:13:04.076 "dma_device_id": "system", 00:13:04.076 "dma_device_type": 1 00:13:04.076 }, 00:13:04.076 { 00:13:04.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.076 "dma_device_type": 2 00:13:04.076 } 00:13:04.076 ], 00:13:04.076 "driver_specific": {} 00:13:04.076 } 00:13:04.076 ]' 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:04.076 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.448 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:13:05.448 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:05.448 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.448 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:05.448 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:07.343 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:07.343 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:07.343 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.343 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:07.343 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.343 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:07.343 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:07.343 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:07.343 10:40:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:07.343 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:07.343 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:07.343 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:07.343 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:07.343 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:07.343 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:07.343 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:07.343 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:07.343 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:07.906 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:08.837 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:08.837 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:08.837 10:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:08.837 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.837 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.837 ************************************ 00:13:08.837 START TEST filesystem_ext4 00:13:08.837 ************************************ 00:13:08.837 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:08.837 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:08.837 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:08.837 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:08.837 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:08.837 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:08.837 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:08.837 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:08.837 10:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:08.837 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:08.837 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:08.837 mke2fs 1.47.0 (5-Feb-2023) 00:13:08.837 Discarding device blocks: 0/522240 done 00:13:08.837 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:08.837 Filesystem UUID: 5dceea40-fe8a-4c3d-b3cf-a138e1d67c13 00:13:08.837 Superblock backups stored on blocks: 00:13:08.837 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:08.837 00:13:08.837 Allocating group tables: 0/64 done 00:13:08.837 Writing inode tables: 0/64 done 00:13:11.361 Creating journal (8192 blocks): done 00:13:11.361 Writing superblocks and filesystem accounting information: 0/64 done 00:13:11.361 00:13:11.361 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:11.361 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:17.913 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:17.914 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:17.914 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:17.914 10:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:17.914 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:17.914 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3834383 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:17.914 00:13:17.914 real 0m8.533s 00:13:17.914 user 0m0.027s 00:13:17.914 sys 0m0.073s 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:17.914 ************************************ 00:13:17.914 END TEST filesystem_ext4 00:13:17.914 ************************************ 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:17.914 
10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.914 ************************************ 00:13:17.914 START TEST filesystem_btrfs 00:13:17.914 ************************************ 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:17.914 10:41:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:17.914 btrfs-progs v6.8.1 00:13:17.914 See https://btrfs.readthedocs.io for more information. 00:13:17.914 00:13:17.914 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:17.914 NOTE: several default settings have changed in version 5.15, please make sure 00:13:17.914 this does not affect your deployments: 00:13:17.914 - DUP for metadata (-m dup) 00:13:17.914 - enabled no-holes (-O no-holes) 00:13:17.914 - enabled free-space-tree (-R free-space-tree) 00:13:17.914 00:13:17.914 Label: (null) 00:13:17.914 UUID: 94637ba3-cf71-44d7-829b-937b58d16c92 00:13:17.914 Node size: 16384 00:13:17.914 Sector size: 4096 (CPU page size: 4096) 00:13:17.914 Filesystem size: 510.00MiB 00:13:17.914 Block group profiles: 00:13:17.914 Data: single 8.00MiB 00:13:17.914 Metadata: DUP 32.00MiB 00:13:17.914 System: DUP 8.00MiB 00:13:17.914 SSD detected: yes 00:13:17.914 Zoned device: no 00:13:17.914 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:17.914 Checksum: crc32c 00:13:17.914 Number of devices: 1 00:13:17.914 Devices: 00:13:17.914 ID SIZE PATH 00:13:17.914 1 510.00MiB /dev/nvme0n1p1 00:13:17.914 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:17.914 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:18.172 10:41:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:18.172 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:18.172 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:18.172 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:18.172 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:18.172 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:18.172 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3834383 00:13:18.172 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:18.172 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:18.173 00:13:18.173 real 0m0.782s 00:13:18.173 user 0m0.024s 00:13:18.173 sys 0m0.119s 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.173 
10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:18.173 ************************************ 00:13:18.173 END TEST filesystem_btrfs 00:13:18.173 ************************************ 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:18.173 ************************************ 00:13:18.173 START TEST filesystem_xfs 00:13:18.173 ************************************ 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:18.173 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:18.430 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:18.430 = sectsz=512 attr=2, projid32bit=1 00:13:18.430 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:18.430 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:18.430 data = bsize=4096 blocks=130560, imaxpct=25 00:13:18.430 = sunit=0 swidth=0 blks 00:13:18.430 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:18.430 log =internal log bsize=4096 blocks=16384, version=2 00:13:18.430 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:18.430 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:19.361 Discarding blocks...Done. 
00:13:19.361 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:19.361 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:21.885 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:21.885 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:21.885 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:21.885 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:21.885 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:21.885 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:21.885 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3834383 00:13:21.885 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:21.885 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:21.885 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:21.885 10:41:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:21.885 00:13:21.885 real 0m3.683s 00:13:21.885 user 0m0.028s 00:13:21.885 sys 0m0.073s 00:13:21.885 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.885 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:21.885 ************************************ 00:13:21.885 END TEST filesystem_xfs 00:13:21.885 ************************************ 00:13:21.885 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:22.142 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:22.142 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3834383 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3834383 ']' 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3834383 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3834383 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3834383' 00:13:22.143 killing process with pid 3834383 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3834383 00:13:22.143 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3834383 00:13:22.400 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:22.400 00:13:22.400 real 0m19.060s 00:13:22.400 user 1m15.054s 00:13:22.400 sys 0m1.453s 00:13:22.400 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.400 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.401 ************************************ 00:13:22.401 END TEST nvmf_filesystem_no_in_capsule 00:13:22.401 ************************************ 00:13:22.659 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:22.659 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:22.659 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.659 10:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:22.659 ************************************ 00:13:22.659 START TEST nvmf_filesystem_in_capsule 00:13:22.659 ************************************ 00:13:22.659 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:13:22.659 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:22.659 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:22.659 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:22.659 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:22.659 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.660 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3837803 00:13:22.660 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3837803 00:13:22.660 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:22.660 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3837803 ']' 00:13:22.660 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.660 10:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.660 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.660 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.660 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.660 [2024-11-19 10:41:12.316935] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:13:22.660 [2024-11-19 10:41:12.316978] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.660 [2024-11-19 10:41:12.397283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.660 [2024-11-19 10:41:12.436858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.660 [2024-11-19 10:41:12.436898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.660 [2024-11-19 10:41:12.436905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.660 [2024-11-19 10:41:12.436911] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.660 [2024-11-19 10:41:12.436916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:22.660 [2024-11-19 10:41:12.438516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.660 [2024-11-19 10:41:12.438623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.660 [2024-11-19 10:41:12.438713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.660 [2024-11-19 10:41:12.438711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.918 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.918 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:22.918 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:22.918 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:22.918 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.918 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.918 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:22.918 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:22.918 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.918 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.918 [2024-11-19 10:41:12.583644] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.918 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.918 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:22.918 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.918 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.176 Malloc1 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.176 10:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.176 [2024-11-19 10:41:12.733131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.176 10:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:23.176 { 00:13:23.176 "name": "Malloc1", 00:13:23.176 "aliases": [ 00:13:23.176 "eb051dfa-c87e-4a1f-8c08-398b734b57e6" 00:13:23.176 ], 00:13:23.176 "product_name": "Malloc disk", 00:13:23.176 "block_size": 512, 00:13:23.176 "num_blocks": 1048576, 00:13:23.176 "uuid": "eb051dfa-c87e-4a1f-8c08-398b734b57e6", 00:13:23.176 "assigned_rate_limits": { 00:13:23.176 "rw_ios_per_sec": 0, 00:13:23.176 "rw_mbytes_per_sec": 0, 00:13:23.176 "r_mbytes_per_sec": 0, 00:13:23.176 "w_mbytes_per_sec": 0 00:13:23.176 }, 00:13:23.176 "claimed": true, 00:13:23.176 "claim_type": "exclusive_write", 00:13:23.176 "zoned": false, 00:13:23.176 "supported_io_types": { 00:13:23.176 "read": true, 00:13:23.176 "write": true, 00:13:23.176 "unmap": true, 00:13:23.176 "flush": true, 00:13:23.176 "reset": true, 00:13:23.176 "nvme_admin": false, 00:13:23.176 "nvme_io": false, 00:13:23.176 "nvme_io_md": false, 00:13:23.176 "write_zeroes": true, 00:13:23.176 "zcopy": true, 00:13:23.176 "get_zone_info": false, 00:13:23.176 "zone_management": false, 00:13:23.176 "zone_append": false, 00:13:23.176 "compare": false, 00:13:23.176 "compare_and_write": false, 00:13:23.176 "abort": true, 00:13:23.176 "seek_hole": false, 00:13:23.176 "seek_data": false, 00:13:23.176 "copy": true, 00:13:23.176 "nvme_iov_md": false 00:13:23.176 }, 00:13:23.176 "memory_domains": [ 00:13:23.176 { 00:13:23.176 "dma_device_id": "system", 00:13:23.176 "dma_device_type": 1 00:13:23.176 }, 00:13:23.176 { 00:13:23.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.176 "dma_device_type": 2 00:13:23.176 } 00:13:23.176 ], 00:13:23.176 
"driver_specific": {} 00:13:23.176 } 00:13:23.176 ]' 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:23.176 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:23.177 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.547 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:24.547 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:24.547 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.547 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:13:24.547 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:26.443 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:26.443 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:26.443 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:26.443 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:26.443 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:26.443 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:26.443 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:26.443 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:26.443 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:26.443 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:26.443 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:26.443 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:26.443 10:41:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:26.443 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:26.443 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:26.443 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:26.443 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:26.700 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:26.957 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:27.888 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:27.888 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:27.888 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:27.888 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.888 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:27.888 ************************************ 00:13:27.888 START TEST filesystem_in_capsule_ext4 00:13:27.888 ************************************ 00:13:27.888 10:41:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:27.888 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:27.888 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:27.888 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:27.888 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:27.888 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:27.888 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:27.888 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:27.888 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:27.888 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:27.888 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:27.888 mke2fs 1.47.0 (5-Feb-2023) 00:13:27.888 Discarding device blocks: 
0/522240 done 00:13:27.888 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:27.888 Filesystem UUID: 0e211466-b8ad-4d49-87f0-7781d09c8d7f 00:13:27.888 Superblock backups stored on blocks: 00:13:27.888 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:27.888 00:13:27.888 Allocating group tables: 0/64 done 00:13:27.888 Writing inode tables: 0/64 done 00:13:28.146 Creating journal (8192 blocks): done 00:13:29.590 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:13:29.590 00:13:29.590 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:29.590 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:36.141 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:36.141 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:36.141 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:36.141 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:36.141 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3837803 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:36.142 00:13:36.142 real 0m7.176s 00:13:36.142 user 0m0.024s 00:13:36.142 sys 0m0.076s 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:36.142 ************************************ 00:13:36.142 END TEST filesystem_in_capsule_ext4 00:13:36.142 ************************************ 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:36.142 ************************************ 00:13:36.142 START 
TEST filesystem_in_capsule_btrfs 00:13:36.142 ************************************ 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:36.142 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:36.142 btrfs-progs v6.8.1 00:13:36.142 See https://btrfs.readthedocs.io for more information. 00:13:36.142 00:13:36.142 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:36.142 NOTE: several default settings have changed in version 5.15, please make sure 00:13:36.142 this does not affect your deployments: 00:13:36.142 - DUP for metadata (-m dup) 00:13:36.142 - enabled no-holes (-O no-holes) 00:13:36.142 - enabled free-space-tree (-R free-space-tree) 00:13:36.142 00:13:36.142 Label: (null) 00:13:36.142 UUID: 4cd869d8-9d15-4f81-acc7-aaad3b556240 00:13:36.142 Node size: 16384 00:13:36.142 Sector size: 4096 (CPU page size: 4096) 00:13:36.142 Filesystem size: 510.00MiB 00:13:36.142 Block group profiles: 00:13:36.142 Data: single 8.00MiB 00:13:36.142 Metadata: DUP 32.00MiB 00:13:36.142 System: DUP 8.00MiB 00:13:36.142 SSD detected: yes 00:13:36.142 Zoned device: no 00:13:36.142 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:36.142 Checksum: crc32c 00:13:36.142 Number of devices: 1 00:13:36.142 Devices: 00:13:36.142 ID SIZE PATH 00:13:36.142 1 510.00MiB /dev/nvme0n1p1 00:13:36.142 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3837803 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:36.142 00:13:36.142 real 0m0.538s 00:13:36.142 user 0m0.023s 00:13:36.142 sys 0m0.116s 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:36.142 ************************************ 00:13:36.142 END TEST filesystem_in_capsule_btrfs 00:13:36.142 ************************************ 00:13:36.142 10:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:36.142 ************************************ 00:13:36.142 START TEST filesystem_in_capsule_xfs 00:13:36.142 ************************************ 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:36.142 
10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:36.142 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:36.142 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:36.142 = sectsz=512 attr=2, projid32bit=1 00:13:36.142 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:36.142 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:36.142 data = bsize=4096 blocks=130560, imaxpct=25 00:13:36.142 = sunit=0 swidth=0 blks 00:13:36.142 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:36.142 log =internal log bsize=4096 blocks=16384, version=2 00:13:36.142 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:36.142 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:37.075 Discarding blocks...Done. 
00:13:37.075 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:37.075 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:39.602 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3837803 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:39.602 00:13:39.602 real 0m3.695s 00:13:39.602 user 0m0.031s 00:13:39.602 sys 0m0.068s 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:39.602 ************************************ 00:13:39.602 END TEST filesystem_in_capsule_xfs 00:13:39.602 ************************************ 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:39.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:39.602 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.861 10:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:39.861 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.861 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:39.861 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.861 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.861 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:39.861 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.861 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:39.861 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3837803 00:13:39.861 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3837803 ']' 00:13:39.861 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3837803 00:13:39.861 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:39.861 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.861 10:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3837803 00:13:39.861 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.861 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.861 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3837803' 00:13:39.861 killing process with pid 3837803 00:13:39.861 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3837803 00:13:39.861 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3837803 00:13:40.119 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:40.119 00:13:40.119 real 0m17.529s 00:13:40.119 user 1m8.996s 00:13:40.119 sys 0m1.403s 00:13:40.119 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.119 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:40.119 ************************************ 00:13:40.119 END TEST nvmf_filesystem_in_capsule 00:13:40.119 ************************************ 00:13:40.119 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:40.119 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:40.119 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:40.120 rmmod nvme_tcp 00:13:40.120 rmmod nvme_fabrics 00:13:40.120 rmmod nvme_keyring 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.120 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.656 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:42.656 00:13:42.656 real 0m45.381s 00:13:42.656 user 2m26.066s 00:13:42.656 sys 0m7.671s 00:13:42.656 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.656 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:42.656 ************************************ 00:13:42.656 END TEST nvmf_filesystem 00:13:42.656 ************************************ 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:42.656 ************************************ 00:13:42.656 START TEST nvmf_target_discovery 00:13:42.656 ************************************ 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:42.656 * Looking for test storage... 
00:13:42.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:42.656 
10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:42.656 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:42.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.657 --rc genhtml_branch_coverage=1 00:13:42.657 --rc genhtml_function_coverage=1 00:13:42.657 --rc genhtml_legend=1 00:13:42.657 --rc geninfo_all_blocks=1 00:13:42.657 --rc geninfo_unexecuted_blocks=1 00:13:42.657 00:13:42.657 ' 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:42.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.657 --rc genhtml_branch_coverage=1 00:13:42.657 --rc genhtml_function_coverage=1 00:13:42.657 --rc genhtml_legend=1 00:13:42.657 --rc geninfo_all_blocks=1 00:13:42.657 --rc geninfo_unexecuted_blocks=1 00:13:42.657 00:13:42.657 ' 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:42.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.657 --rc genhtml_branch_coverage=1 00:13:42.657 --rc genhtml_function_coverage=1 00:13:42.657 --rc genhtml_legend=1 00:13:42.657 --rc geninfo_all_blocks=1 00:13:42.657 --rc geninfo_unexecuted_blocks=1 00:13:42.657 00:13:42.657 ' 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:42.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.657 --rc genhtml_branch_coverage=1 00:13:42.657 --rc genhtml_function_coverage=1 00:13:42.657 --rc genhtml_legend=1 00:13:42.657 --rc geninfo_all_blocks=1 00:13:42.657 --rc geninfo_unexecuted_blocks=1 00:13:42.657 00:13:42.657 ' 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.657 10:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:42.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:42.657 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.382 10:41:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:49.382 10:41:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:49.382 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:49.382 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:49.382 10:41:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:49.382 Found net devices under 0000:86:00.0: cvl_0_0 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:49.382 10:41:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:49.382 Found net devices under 0000:86:00.1: cvl_0_1 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:13:49.382 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:49.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:49.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:49.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:49.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:49.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:49.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:49.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:49.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:49.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:49.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:13:49.383 00:13:49.383 --- 10.0.0.2 ping statistics --- 00:13:49.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.383 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:49.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:49.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:13:49.383 00:13:49.383 --- 10.0.0.1 ping statistics --- 00:13:49.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.383 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3844354 00:13:49.383 10:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3844354 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3844354 ']' 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.383 [2024-11-19 10:41:38.317677] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:13:49.383 [2024-11-19 10:41:38.317719] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.383 [2024-11-19 10:41:38.379518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:49.383 [2024-11-19 10:41:38.422496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:49.383 [2024-11-19 10:41:38.422534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.383 [2024-11-19 10:41:38.422541] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.383 [2024-11-19 10:41:38.422547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.383 [2024-11-19 10:41:38.422552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.383 [2024-11-19 10:41:38.424109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.383 [2024-11-19 10:41:38.424151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.383 [2024-11-19 10:41:38.424270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.383 [2024-11-19 10:41:38.424271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
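The reactor messages above show `nvmf_tgt` started with `-m 0xF` bringing up one reactor per set bit, i.e. cores 0-3. As a minimal illustration of how such a DPDK-style hex core mask maps to core IDs (this helper is hypothetical, not part of the SPDK tree):

```python
# Decode a hex core mask like the "-m 0xF" passed to nvmf_tgt above
# into the list of CPU cores the reactors run on.
# Illustrative sketch only; SPDK/DPDK do this internally in C.
def cores_from_mask(mask: str) -> list[int]:
    value = int(mask, 16)
    cores = []
    bit = 0
    while value:
        if value & 1:       # bit N set -> reactor on core N
            cores.append(bit)
        value >>= 1
        bit += 1
    return cores

print(cores_from_mask("0xF"))  # four cores, matching the four "Reactor started" lines
```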
common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.383 [2024-11-19 10:41:38.563126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.383 Null1 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.383 
10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.383 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 [2024-11-19 10:41:38.608312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 Null2 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 
10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 Null3 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 Null4 00:13:49.384 
10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:49.384 00:13:49.384 Discovery Log Number of Records 6, Generation counter 6 00:13:49.384 =====Discovery Log Entry 0====== 00:13:49.384 trtype: tcp 00:13:49.384 adrfam: ipv4 00:13:49.384 subtype: current discovery subsystem 00:13:49.384 treq: not required 00:13:49.384 portid: 0 00:13:49.384 trsvcid: 4420 00:13:49.384 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:49.384 traddr: 10.0.0.2 00:13:49.384 eflags: explicit discovery connections, duplicate discovery information 00:13:49.384 sectype: none 00:13:49.384 =====Discovery Log Entry 1====== 00:13:49.384 trtype: tcp 00:13:49.384 adrfam: ipv4 00:13:49.384 subtype: nvme subsystem 00:13:49.384 treq: not required 00:13:49.384 portid: 0 00:13:49.384 trsvcid: 4420 00:13:49.384 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:49.384 traddr: 10.0.0.2 00:13:49.384 eflags: none 00:13:49.384 sectype: none 00:13:49.384 =====Discovery Log Entry 2====== 00:13:49.384 
trtype: tcp 00:13:49.384 adrfam: ipv4 00:13:49.384 subtype: nvme subsystem 00:13:49.384 treq: not required 00:13:49.384 portid: 0 00:13:49.384 trsvcid: 4420 00:13:49.384 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:49.384 traddr: 10.0.0.2 00:13:49.384 eflags: none 00:13:49.384 sectype: none 00:13:49.384 =====Discovery Log Entry 3====== 00:13:49.384 trtype: tcp 00:13:49.384 adrfam: ipv4 00:13:49.384 subtype: nvme subsystem 00:13:49.384 treq: not required 00:13:49.384 portid: 0 00:13:49.384 trsvcid: 4420 00:13:49.384 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:49.384 traddr: 10.0.0.2 00:13:49.384 eflags: none 00:13:49.384 sectype: none 00:13:49.384 =====Discovery Log Entry 4====== 00:13:49.384 trtype: tcp 00:13:49.384 adrfam: ipv4 00:13:49.384 subtype: nvme subsystem 00:13:49.384 treq: not required 00:13:49.384 portid: 0 00:13:49.384 trsvcid: 4420 00:13:49.384 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:49.384 traddr: 10.0.0.2 00:13:49.384 eflags: none 00:13:49.384 sectype: none 00:13:49.384 =====Discovery Log Entry 5====== 00:13:49.384 trtype: tcp 00:13:49.384 adrfam: ipv4 00:13:49.384 subtype: discovery subsystem referral 00:13:49.384 treq: not required 00:13:49.384 portid: 0 00:13:49.384 trsvcid: 4430 00:13:49.384 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:49.384 traddr: 10.0.0.2 00:13:49.384 eflags: none 00:13:49.384 sectype: none 00:13:49.384 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:49.385 Perform nvmf subsystem discovery via RPC 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.385 [ 00:13:49.385 { 00:13:49.385 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:13:49.385 "subtype": "Discovery", 00:13:49.385 "listen_addresses": [ 00:13:49.385 { 00:13:49.385 "trtype": "TCP", 00:13:49.385 "adrfam": "IPv4", 00:13:49.385 "traddr": "10.0.0.2", 00:13:49.385 "trsvcid": "4420" 00:13:49.385 } 00:13:49.385 ], 00:13:49.385 "allow_any_host": true, 00:13:49.385 "hosts": [] 00:13:49.385 }, 00:13:49.385 { 00:13:49.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.385 "subtype": "NVMe", 00:13:49.385 "listen_addresses": [ 00:13:49.385 { 00:13:49.385 "trtype": "TCP", 00:13:49.385 "adrfam": "IPv4", 00:13:49.385 "traddr": "10.0.0.2", 00:13:49.385 "trsvcid": "4420" 00:13:49.385 } 00:13:49.385 ], 00:13:49.385 "allow_any_host": true, 00:13:49.385 "hosts": [], 00:13:49.385 "serial_number": "SPDK00000000000001", 00:13:49.385 "model_number": "SPDK bdev Controller", 00:13:49.385 "max_namespaces": 32, 00:13:49.385 "min_cntlid": 1, 00:13:49.385 "max_cntlid": 65519, 00:13:49.385 "namespaces": [ 00:13:49.385 { 00:13:49.385 "nsid": 1, 00:13:49.385 "bdev_name": "Null1", 00:13:49.385 "name": "Null1", 00:13:49.385 "nguid": "927C997F18954F34996C1DDE10A39B38", 00:13:49.385 "uuid": "927c997f-1895-4f34-996c-1dde10a39b38" 00:13:49.385 } 00:13:49.385 ] 00:13:49.385 }, 00:13:49.385 { 00:13:49.385 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:49.385 "subtype": "NVMe", 00:13:49.385 "listen_addresses": [ 00:13:49.385 { 00:13:49.385 "trtype": "TCP", 00:13:49.385 "adrfam": "IPv4", 00:13:49.385 "traddr": "10.0.0.2", 00:13:49.385 "trsvcid": "4420" 00:13:49.385 } 00:13:49.385 ], 00:13:49.385 "allow_any_host": true, 00:13:49.385 "hosts": [], 00:13:49.385 "serial_number": "SPDK00000000000002", 00:13:49.385 "model_number": "SPDK bdev Controller", 00:13:49.385 "max_namespaces": 32, 00:13:49.385 "min_cntlid": 1, 00:13:49.385 "max_cntlid": 65519, 00:13:49.385 "namespaces": [ 00:13:49.385 { 00:13:49.385 "nsid": 1, 00:13:49.385 "bdev_name": "Null2", 00:13:49.385 "name": "Null2", 00:13:49.385 "nguid": "6DB7F85854CD4BAD84D2EE14042E5814", 
00:13:49.385 "uuid": "6db7f858-54cd-4bad-84d2-ee14042e5814" 00:13:49.385 } 00:13:49.385 ] 00:13:49.385 }, 00:13:49.385 { 00:13:49.385 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:49.385 "subtype": "NVMe", 00:13:49.385 "listen_addresses": [ 00:13:49.385 { 00:13:49.385 "trtype": "TCP", 00:13:49.385 "adrfam": "IPv4", 00:13:49.385 "traddr": "10.0.0.2", 00:13:49.385 "trsvcid": "4420" 00:13:49.385 } 00:13:49.385 ], 00:13:49.385 "allow_any_host": true, 00:13:49.385 "hosts": [], 00:13:49.385 "serial_number": "SPDK00000000000003", 00:13:49.385 "model_number": "SPDK bdev Controller", 00:13:49.385 "max_namespaces": 32, 00:13:49.385 "min_cntlid": 1, 00:13:49.385 "max_cntlid": 65519, 00:13:49.385 "namespaces": [ 00:13:49.385 { 00:13:49.385 "nsid": 1, 00:13:49.385 "bdev_name": "Null3", 00:13:49.385 "name": "Null3", 00:13:49.385 "nguid": "2720C32A818C497E83E57BF3F35CC508", 00:13:49.385 "uuid": "2720c32a-818c-497e-83e5-7bf3f35cc508" 00:13:49.385 } 00:13:49.385 ] 00:13:49.385 }, 00:13:49.385 { 00:13:49.385 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:49.385 "subtype": "NVMe", 00:13:49.385 "listen_addresses": [ 00:13:49.385 { 00:13:49.385 "trtype": "TCP", 00:13:49.385 "adrfam": "IPv4", 00:13:49.385 "traddr": "10.0.0.2", 00:13:49.385 "trsvcid": "4420" 00:13:49.385 } 00:13:49.385 ], 00:13:49.385 "allow_any_host": true, 00:13:49.385 "hosts": [], 00:13:49.385 "serial_number": "SPDK00000000000004", 00:13:49.385 "model_number": "SPDK bdev Controller", 00:13:49.385 "max_namespaces": 32, 00:13:49.385 "min_cntlid": 1, 00:13:49.385 "max_cntlid": 65519, 00:13:49.385 "namespaces": [ 00:13:49.385 { 00:13:49.385 "nsid": 1, 00:13:49.385 "bdev_name": "Null4", 00:13:49.385 "name": "Null4", 00:13:49.385 "nguid": "9815E5C71DCA41789CC04A5D6BDAA296", 00:13:49.385 "uuid": "9815e5c7-1dca-4178-9cc0-4a5d6bdaa296" 00:13:49.385 } 00:13:49.385 ] 00:13:49.385 } 00:13:49.385 ] 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.385 
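The `nvmf_get_subsystems` RPC reply above is a JSON array of subsystem objects. A small sketch of consuming that shape, mapping each subsystem NQN to its backing null bdevs (the abbreviated sample below mirrors the listing in the log; it is illustrative only, not SPDK code):

```python
import json

# Abbreviated nvmf_get_subsystems-style reply, shaped like the log output above.
sample = json.loads("""
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery",
   "listen_addresses": [{"trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420"}]},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "listen_addresses": [{"trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420"}],
   "namespaces": [{"nsid": 1, "bdev_name": "Null1"}]}
]
""")

def summarize(subsystems):
    # Map subsystem NQN -> list of backing bdev names.
    # The discovery subsystem carries no namespaces, so it maps to [].
    return {s["nqn"]: [ns["bdev_name"] for ns in s.get("namespaces", [])]
            for s in subsystems}

print(summarize(sample))
```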
10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.385 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.385 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.385 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:49.385 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.385 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.385 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:49.386 rmmod nvme_tcp 00:13:49.386 rmmod nvme_fabrics 00:13:49.386 rmmod nvme_keyring 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3844354 ']' 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3844354 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3844354 ']' 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3844354 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.386 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3844354 00:13:49.644 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.644 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.644 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3844354' 00:13:49.644 killing process with pid 3844354 00:13:49.644 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3844354 00:13:49.644 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3844354 00:13:49.644 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:49.644 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:49.644 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:49.644 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:49.645 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:49.645 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:49.645 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:49.645 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:49.645 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:13:49.645 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.645 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.645 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:52.180 00:13:52.180 real 0m9.363s 00:13:52.180 user 0m5.535s 00:13:52.180 sys 0m4.847s 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.180 ************************************ 00:13:52.180 END TEST nvmf_target_discovery 00:13:52.180 ************************************ 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:52.180 ************************************ 00:13:52.180 START TEST nvmf_referrals 00:13:52.180 ************************************ 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:52.180 * Looking for test storage... 
00:13:52.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:52.180 10:41:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:52.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.180 
--rc genhtml_branch_coverage=1 00:13:52.180 --rc genhtml_function_coverage=1 00:13:52.180 --rc genhtml_legend=1 00:13:52.180 --rc geninfo_all_blocks=1 00:13:52.180 --rc geninfo_unexecuted_blocks=1 00:13:52.180 00:13:52.180 ' 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:52.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.180 --rc genhtml_branch_coverage=1 00:13:52.180 --rc genhtml_function_coverage=1 00:13:52.180 --rc genhtml_legend=1 00:13:52.180 --rc geninfo_all_blocks=1 00:13:52.180 --rc geninfo_unexecuted_blocks=1 00:13:52.180 00:13:52.180 ' 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:52.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.180 --rc genhtml_branch_coverage=1 00:13:52.180 --rc genhtml_function_coverage=1 00:13:52.180 --rc genhtml_legend=1 00:13:52.180 --rc geninfo_all_blocks=1 00:13:52.180 --rc geninfo_unexecuted_blocks=1 00:13:52.180 00:13:52.180 ' 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:52.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.180 --rc genhtml_branch_coverage=1 00:13:52.180 --rc genhtml_function_coverage=1 00:13:52.180 --rc genhtml_legend=1 00:13:52.180 --rc geninfo_all_blocks=1 00:13:52.180 --rc geninfo_unexecuted_blocks=1 00:13:52.180 00:13:52.180 ' 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.180 
10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.180 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.181 10:41:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:52.181 10:41:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:52.181 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:58.749 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:58.749 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:58.749 Found net devices under 0000:86:00.0: cvl_0_0 00:13:58.749 10:41:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:58.749 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:58.750 Found net devices under 0000:86:00.1: cvl_0_1 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:58.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:13:58.750 00:13:58.750 --- 10.0.0.2 ping statistics --- 00:13:58.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.750 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:58.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:58.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:13:58.750 00:13:58.750 --- 10.0.0.1 ping statistics --- 00:13:58.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.750 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3848027 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3848027 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3848027 ']' 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.750 [2024-11-19 10:41:47.760896] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:13:58.750 [2024-11-19 10:41:47.760947] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.750 [2024-11-19 10:41:47.839823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:58.750 [2024-11-19 10:41:47.880108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.750 [2024-11-19 10:41:47.880149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:58.750 [2024-11-19 10:41:47.880156] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.750 [2024-11-19 10:41:47.880162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.750 [2024-11-19 10:41:47.880167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:58.750 [2024-11-19 10:41:47.881726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.750 [2024-11-19 10:41:47.881836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.750 [2024-11-19 10:41:47.881949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.750 [2024-11-19 10:41:47.881950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:58.750 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.750 [2024-11-19 10:41:48.030813] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.750 [2024-11-19 10:41:48.044110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.750 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:58.751 10:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.751 10:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:58.751 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:59.008 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:59.264 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:59.264 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:59.264 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:59.264 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:59.264 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:59.264 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:59.264 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:59.520 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:59.521 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:59.778 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:59.778 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:59.778 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:59.778 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:59.778 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:59.778 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:59.778 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:59.778 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:59.778 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:59.778 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:00.034 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:00.034 10:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:00.034 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:00.034 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:00.034 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:00.034 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:00.290 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:00.290 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:00.290 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.290 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.290 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.290 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:00.290 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:00.290 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.290 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:14:00.290 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.290 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:00.290 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:00.290 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:00.290 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:00.290 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:00.290 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:00.290 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:00.547 rmmod nvme_tcp 00:14:00.547 rmmod nvme_fabrics 00:14:00.547 rmmod nvme_keyring 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3848027 ']' 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3848027 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3848027 ']' 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3848027 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3848027 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3848027' 00:14:00.547 killing process with pid 3848027 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 3848027 00:14:00.547 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3848027 00:14:00.805 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:00.805 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:00.805 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:00.805 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:14:00.805 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:14:00.805 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:14:00.805 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:00.805 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:00.805 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:00.805 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.805 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.805 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.711 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:02.711 00:14:02.711 real 0m11.021s 00:14:02.711 user 0m12.656s 00:14:02.711 sys 0m5.349s 00:14:02.711 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.711 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:02.711 
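The referrals test above repeatedly extracts referral addresses from `nvme discover -o json` output with the jq filter `.records[] | select(.subtype != "current discovery subsystem").traddr`, then sorts and compares them against the RPC view. A minimal Python sketch of the same selection follows; the sample discovery log page is an illustrative assumption shaped after the records this run produced, not captured target output:

```python
# Mirror of the jq filter used by get_referral_ips in target/referrals.sh:
#   jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
# The sample page below is an assumed stand-in for `nvme discover ... -o json`.
import json

sample_page = json.loads("""
{
  "records": [
    {"subtype": "current discovery subsystem",   "traddr": "10.0.0.2"},
    {"subtype": "discovery subsystem referral",  "traddr": "127.0.0.2"},
    {"subtype": "discovery subsystem referral",  "traddr": "127.0.0.3"},
    {"subtype": "discovery subsystem referral",  "traddr": "127.0.0.4"}
  ]
}
""")


def referral_ips(page: dict) -> list[str]:
    """Return sorted traddr values, skipping the current discovery subsystem,
    matching the jq select + sort pipeline in the test."""
    return sorted(
        rec["traddr"]
        for rec in page.get("records", [])
        if rec.get("subtype") != "current discovery subsystem"
    )


print(" ".join(referral_ips(sample_page)))
# With the sample above this prints: 127.0.0.2 127.0.0.3 127.0.0.4
```

This matches the `echo 127.0.0.2 127.0.0.3 127.0.0.4` comparisons in the trace: the test passes when the RPC-side list (`nvmf_discovery_get_referrals`) and the wire-side list (discovery log page minus the current discovery subsystem) agree.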
************************************ 00:14:02.711 END TEST nvmf_referrals 00:14:02.711 ************************************ 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:02.971 ************************************ 00:14:02.971 START TEST nvmf_connect_disconnect 00:14:02.971 ************************************ 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:02.971 * Looking for test storage... 
00:14:02.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:02.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.971 --rc genhtml_branch_coverage=1 00:14:02.971 --rc genhtml_function_coverage=1 00:14:02.971 --rc genhtml_legend=1 00:14:02.971 --rc geninfo_all_blocks=1 00:14:02.971 --rc geninfo_unexecuted_blocks=1 00:14:02.971 00:14:02.971 ' 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:02.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.971 --rc genhtml_branch_coverage=1 00:14:02.971 --rc genhtml_function_coverage=1 00:14:02.971 --rc genhtml_legend=1 00:14:02.971 --rc geninfo_all_blocks=1 00:14:02.971 --rc geninfo_unexecuted_blocks=1 00:14:02.971 00:14:02.971 ' 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:02.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.971 --rc genhtml_branch_coverage=1 00:14:02.971 --rc genhtml_function_coverage=1 00:14:02.971 --rc genhtml_legend=1 00:14:02.971 --rc geninfo_all_blocks=1 00:14:02.971 --rc geninfo_unexecuted_blocks=1 00:14:02.971 00:14:02.971 ' 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:02.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.971 --rc genhtml_branch_coverage=1 00:14:02.971 --rc genhtml_function_coverage=1 00:14:02.971 --rc genhtml_legend=1 00:14:02.971 --rc geninfo_all_blocks=1 00:14:02.971 --rc geninfo_unexecuted_blocks=1 00:14:02.971 00:14:02.971 ' 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.971 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:03.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:14:03.231 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.803 10:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:09.803 10:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:09.803 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:09.803 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:09.804 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:09.804 10:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:09.804 Found net devices under 0000:86:00.0: cvl_0_0 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:09.804 10:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:09.804 Found net devices under 0000:86:00.1: cvl_0_1 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.804 10:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:09.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:14:09.804 00:14:09.804 --- 10.0.0.2 ping statistics --- 00:14:09.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.804 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:09.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:14:09.804 00:14:09.804 --- 10.0.0.1 ping statistics --- 00:14:09.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.804 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=3852003 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3852003 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3852003 ']' 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.804 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:09.805 [2024-11-19 10:41:58.825635] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:14:09.805 [2024-11-19 10:41:58.825686] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.805 [2024-11-19 10:41:58.905628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:09.805 [2024-11-19 10:41:58.950751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:09.805 [2024-11-19 10:41:58.950790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.805 [2024-11-19 10:41:58.950798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.805 [2024-11-19 10:41:58.950805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.805 [2024-11-19 10:41:58.950811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.805 [2024-11-19 10:41:58.952475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.805 [2024-11-19 10:41:58.953092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.805 [2024-11-19 10:41:58.953134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.805 [2024-11-19 10:41:58.953133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:10.062 10:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:10.062 [2024-11-19 10:41:59.699732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.062 10:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:10.062 [2024-11-19 10:41:59.776800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:10.062 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:13.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.434 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:26.434 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:26.434 10:42:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:26.434 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:14:26.434 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:26.434 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:14:26.434 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:26.434 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:26.434 rmmod nvme_tcp 00:14:26.434 rmmod nvme_fabrics 00:14:26.434 rmmod nvme_keyring 00:14:26.434 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:26.434 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:14:26.434 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:14:26.434 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3852003 ']' 00:14:26.434 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3852003 00:14:26.434 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3852003 ']' 00:14:26.434 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3852003 00:14:26.434 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:14:26.434 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.434 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3852003 
00:14:26.434 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:26.434 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:26.434 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3852003' 00:14:26.434 killing process with pid 3852003 00:14:26.434 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3852003 00:14:26.434 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3852003 00:14:26.693 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:26.693 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:26.693 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:26.693 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:14:26.693 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:14:26.693 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:26.693 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:14:26.693 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:26.693 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:26.693 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.693 10:42:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:26.693 10:42:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.597 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:28.597 00:14:28.597 real 0m25.767s 00:14:28.597 user 1m10.594s 00:14:28.597 sys 0m5.852s 00:14:28.597 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.597 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:28.597 ************************************ 00:14:28.597 END TEST nvmf_connect_disconnect 00:14:28.597 ************************************ 00:14:28.597 10:42:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:28.597 10:42:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:28.597 10:42:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.597 10:42:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:28.856 ************************************ 00:14:28.856 START TEST nvmf_multitarget 00:14:28.856 ************************************ 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:28.856 * Looking for test storage... 
00:14:28.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.856 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:28.857 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.857 --rc genhtml_branch_coverage=1 00:14:28.857 --rc genhtml_function_coverage=1 00:14:28.857 --rc genhtml_legend=1 00:14:28.857 --rc geninfo_all_blocks=1 00:14:28.857 --rc geninfo_unexecuted_blocks=1 00:14:28.857 00:14:28.857 ' 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:28.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.857 --rc genhtml_branch_coverage=1 00:14:28.857 --rc genhtml_function_coverage=1 00:14:28.857 --rc genhtml_legend=1 00:14:28.857 --rc geninfo_all_blocks=1 00:14:28.857 --rc geninfo_unexecuted_blocks=1 00:14:28.857 00:14:28.857 ' 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:28.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.857 --rc genhtml_branch_coverage=1 00:14:28.857 --rc genhtml_function_coverage=1 00:14:28.857 --rc genhtml_legend=1 00:14:28.857 --rc geninfo_all_blocks=1 00:14:28.857 --rc geninfo_unexecuted_blocks=1 00:14:28.857 00:14:28.857 ' 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:28.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.857 --rc genhtml_branch_coverage=1 00:14:28.857 --rc genhtml_function_coverage=1 00:14:28.857 --rc genhtml_legend=1 00:14:28.857 --rc geninfo_all_blocks=1 00:14:28.857 --rc geninfo_unexecuted_blocks=1 00:14:28.857 00:14:28.857 ' 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.857 10:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:28.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.857 10:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:14:28.857 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:35.426 10:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:35.426 10:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:35.426 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:35.426 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.426 10:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:35.426 Found net devices under 0000:86:00.0: cvl_0_0 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.426 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.427 
10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:35.427 Found net devices under 0000:86:00.1: cvl_0_1 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:35.427 10:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:35.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:14:35.427 00:14:35.427 --- 10.0.0.2 ping statistics --- 00:14:35.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.427 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:35.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:35.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:14:35.427 00:14:35.427 --- 10.0.0.1 ping statistics --- 00:14:35.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.427 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3859029 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3859029 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3859029 ']' 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:35.427 [2024-11-19 10:42:24.661609] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:14:35.427 [2024-11-19 10:42:24.661658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.427 [2024-11-19 10:42:24.739538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.427 [2024-11-19 10:42:24.782343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.427 [2024-11-19 10:42:24.782381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:35.427 [2024-11-19 10:42:24.782389] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.427 [2024-11-19 10:42:24.782396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.427 [2024-11-19 10:42:24.782401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.427 [2024-11-19 10:42:24.783996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.427 [2024-11-19 10:42:24.784104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.427 [2024-11-19 10:42:24.784233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.427 [2024-11-19 10:42:24.784234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:35.427 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:35.427 10:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:35.427 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:35.427 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:35.427 "nvmf_tgt_1" 00:14:35.427 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:35.684 "nvmf_tgt_2" 00:14:35.684 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:35.684 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:35.684 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:35.684 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:35.684 true 00:14:35.684 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:35.941 true 00:14:35.941 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:35.941 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:35.941 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:35.941 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:35.941 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:35.941 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:35.941 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:35.941 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:35.941 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:35.941 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:35.941 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:35.941 rmmod nvme_tcp 00:14:35.941 rmmod nvme_fabrics 00:14:35.941 rmmod nvme_keyring 00:14:35.941 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3859029 ']' 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3859029 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3859029 ']' 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3859029 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3859029 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3859029' 00:14:36.200 killing process with pid 3859029 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3859029 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3859029 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:36.200 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:14:36.201 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:36.201 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:36.201 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.201 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:36.201 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:38.736 00:14:38.736 real 0m9.618s 00:14:38.736 user 0m7.132s 00:14:38.736 sys 0m4.933s 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:38.736 ************************************ 00:14:38.736 END TEST nvmf_multitarget 00:14:38.736 ************************************ 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:38.736 ************************************ 00:14:38.736 START TEST nvmf_rpc 00:14:38.736 ************************************ 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:38.736 * Looking for test storage... 
00:14:38.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:38.736 10:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:38.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.736 --rc genhtml_branch_coverage=1 00:14:38.736 --rc genhtml_function_coverage=1 00:14:38.736 --rc genhtml_legend=1 00:14:38.736 --rc geninfo_all_blocks=1 00:14:38.736 --rc geninfo_unexecuted_blocks=1 
00:14:38.736 00:14:38.736 ' 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:38.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.736 --rc genhtml_branch_coverage=1 00:14:38.736 --rc genhtml_function_coverage=1 00:14:38.736 --rc genhtml_legend=1 00:14:38.736 --rc geninfo_all_blocks=1 00:14:38.736 --rc geninfo_unexecuted_blocks=1 00:14:38.736 00:14:38.736 ' 00:14:38.736 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:38.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.736 --rc genhtml_branch_coverage=1 00:14:38.736 --rc genhtml_function_coverage=1 00:14:38.736 --rc genhtml_legend=1 00:14:38.736 --rc geninfo_all_blocks=1 00:14:38.736 --rc geninfo_unexecuted_blocks=1 00:14:38.736 00:14:38.736 ' 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:38.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.737 --rc genhtml_branch_coverage=1 00:14:38.737 --rc genhtml_function_coverage=1 00:14:38.737 --rc genhtml_legend=1 00:14:38.737 --rc geninfo_all_blocks=1 00:14:38.737 --rc geninfo_unexecuted_blocks=1 00:14:38.737 00:14:38.737 ' 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.737 10:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:38.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:38.737 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:38.737 10:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:45.304 
10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:14:45.304 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:45.304 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:45.304 Found net devices under 0000:86:00.0: cvl_0_0 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:45.304 Found net devices under 0000:86:00.1: cvl_0_1 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.304 10:42:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:45.304 
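At this point `nvmf_tcp_init` has picked its interfaces and addresses (target `cvl_0_0` at 10.0.0.2, initiator `cvl_0_1` at 10.0.0.1) and goes on to wire them together through a network namespace. A condensed dry-run sketch of that sequence, with names and addresses taken from this log (everything here needs root, so by default the commands are only printed; pass `--apply` to execute them):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace wiring performed by nvmf/common.sh's
# nvmf_tcp_init in the trace above. Interface names and IPs mirror the log.
set -euo pipefail

APPLY=${1:-}
run() { if [[ $APPLY == --apply ]]; then "$@"; else echo "+ $*"; fi; }

TARGET_IF=cvl_0_0            # moved into the namespace with the SPDK target
INITIATOR_IF=cvl_0_1         # stays in the host namespace as the initiator
NETNS=${TARGET_IF}_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NETNS"
run ip link set "$TARGET_IF" netns "$NETNS"
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NETNS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NETNS" ip link set "$TARGET_IF" up
run ip netns exec "$NETNS" ip link set lo up
# Open the NVMe/TCP listener port (4420) toward the initiator interface.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Verify connectivity in both directions before nvmf_tgt is started.
run ping -c 1 "$TARGET_IP"
run ip netns exec "$NETNS" ping -c 1 "$INITIATOR_IP"
```

The namespace is why the target app is later launched under `ip netns exec cvl_0_0_ns_spdk`: target and initiator share one host but see isolated network stacks.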
10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:45.304 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:45.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:14:45.305 00:14:45.305 --- 10.0.0.2 ping statistics --- 00:14:45.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.305 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:45.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:45.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:14:45.305 00:14:45.305 --- 10.0.0.1 ping statistics --- 00:14:45.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.305 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3862699 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3862699 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3862699 ']' 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.305 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.305 [2024-11-19 10:42:34.373056] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:14:45.305 [2024-11-19 10:42:34.373104] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.305 [2024-11-19 10:42:34.452746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.305 [2024-11-19 10:42:34.495695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.305 [2024-11-19 10:42:34.495728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:45.305 [2024-11-19 10:42:34.495735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.305 [2024-11-19 10:42:34.495741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.305 [2024-11-19 10:42:34.495746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.305 [2024-11-19 10:42:34.497365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.305 [2024-11-19 10:42:34.497499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.305 [2024-11-19 10:42:34.497604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.305 [2024-11-19 10:42:34.497605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.562 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.562 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:45.562 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:45.562 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:45.562 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.562 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.562 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:45.562 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.562 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.562 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.562 10:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:45.562 "tick_rate": 2100000000, 00:14:45.562 "poll_groups": [ 00:14:45.562 { 00:14:45.562 "name": "nvmf_tgt_poll_group_000", 00:14:45.562 "admin_qpairs": 0, 00:14:45.562 "io_qpairs": 0, 00:14:45.562 "current_admin_qpairs": 0, 00:14:45.562 "current_io_qpairs": 0, 00:14:45.562 "pending_bdev_io": 0, 00:14:45.562 "completed_nvme_io": 0, 00:14:45.562 "transports": [] 00:14:45.562 }, 00:14:45.562 { 00:14:45.562 "name": "nvmf_tgt_poll_group_001", 00:14:45.562 "admin_qpairs": 0, 00:14:45.562 "io_qpairs": 0, 00:14:45.562 "current_admin_qpairs": 0, 00:14:45.562 "current_io_qpairs": 0, 00:14:45.562 "pending_bdev_io": 0, 00:14:45.562 "completed_nvme_io": 0, 00:14:45.562 "transports": [] 00:14:45.562 }, 00:14:45.562 { 00:14:45.562 "name": "nvmf_tgt_poll_group_002", 00:14:45.562 "admin_qpairs": 0, 00:14:45.562 "io_qpairs": 0, 00:14:45.562 "current_admin_qpairs": 0, 00:14:45.562 "current_io_qpairs": 0, 00:14:45.562 "pending_bdev_io": 0, 00:14:45.562 "completed_nvme_io": 0, 00:14:45.562 "transports": [] 00:14:45.562 }, 00:14:45.562 { 00:14:45.562 "name": "nvmf_tgt_poll_group_003", 00:14:45.562 "admin_qpairs": 0, 00:14:45.562 "io_qpairs": 0, 00:14:45.562 "current_admin_qpairs": 0, 00:14:45.562 "current_io_qpairs": 0, 00:14:45.562 "pending_bdev_io": 0, 00:14:45.562 "completed_nvme_io": 0, 00:14:45.562 "transports": [] 00:14:45.562 } 00:14:45.562 ] 00:14:45.562 }' 00:14:45.562 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:45.562 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:45.562 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:45.562 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:45.562 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:45.562 10:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:45.563 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:45.563 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:45.563 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.563 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.820 [2024-11-19 10:42:35.352697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.820 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.820 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:45.820 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.820 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.820 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.820 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:45.820 "tick_rate": 2100000000, 00:14:45.820 "poll_groups": [ 00:14:45.820 { 00:14:45.820 "name": "nvmf_tgt_poll_group_000", 00:14:45.820 "admin_qpairs": 0, 00:14:45.820 "io_qpairs": 0, 00:14:45.820 "current_admin_qpairs": 0, 00:14:45.820 "current_io_qpairs": 0, 00:14:45.820 "pending_bdev_io": 0, 00:14:45.820 "completed_nvme_io": 0, 00:14:45.820 "transports": [ 00:14:45.820 { 00:14:45.820 "trtype": "TCP" 00:14:45.820 } 00:14:45.820 ] 00:14:45.820 }, 00:14:45.820 { 00:14:45.820 "name": "nvmf_tgt_poll_group_001", 00:14:45.820 "admin_qpairs": 0, 00:14:45.820 "io_qpairs": 0, 00:14:45.820 "current_admin_qpairs": 0, 00:14:45.820 "current_io_qpairs": 0, 00:14:45.820 "pending_bdev_io": 0, 00:14:45.820 
"completed_nvme_io": 0, 00:14:45.820 "transports": [ 00:14:45.821 { 00:14:45.821 "trtype": "TCP" 00:14:45.821 } 00:14:45.821 ] 00:14:45.821 }, 00:14:45.821 { 00:14:45.821 "name": "nvmf_tgt_poll_group_002", 00:14:45.821 "admin_qpairs": 0, 00:14:45.821 "io_qpairs": 0, 00:14:45.821 "current_admin_qpairs": 0, 00:14:45.821 "current_io_qpairs": 0, 00:14:45.821 "pending_bdev_io": 0, 00:14:45.821 "completed_nvme_io": 0, 00:14:45.821 "transports": [ 00:14:45.821 { 00:14:45.821 "trtype": "TCP" 00:14:45.821 } 00:14:45.821 ] 00:14:45.821 }, 00:14:45.821 { 00:14:45.821 "name": "nvmf_tgt_poll_group_003", 00:14:45.821 "admin_qpairs": 0, 00:14:45.821 "io_qpairs": 0, 00:14:45.821 "current_admin_qpairs": 0, 00:14:45.821 "current_io_qpairs": 0, 00:14:45.821 "pending_bdev_io": 0, 00:14:45.821 "completed_nvme_io": 0, 00:14:45.821 "transports": [ 00:14:45.821 { 00:14:45.821 "trtype": "TCP" 00:14:45.821 } 00:14:45.821 ] 00:14:45.821 } 00:14:45.821 ] 00:14:45.821 }' 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:45.821 
10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.821 Malloc1 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:45.821 10:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.821 [2024-11-19 10:42:35.522819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:45.821 [2024-11-19 10:42:35.551323] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:14:45.821 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:45.821 could not add new controller: failed to write to nvme-fabrics device 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.821 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:47.189 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:47.189 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:47.189 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:47.189 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:47.189 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:49.082 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:49.083 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:49.083 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:49.083 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:49.083 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:49.083 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
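The sequence just traced is SPDK's host-ACL round trip: with `allow_any_host` disabled, the initiator's `nvme connect` is rejected ("does not allow host") until its host NQN is whitelisted with `nvmf_subsystem_add_host`. Condensed to the commands involved (a sketch: `rpc.py` stands in for the log's `rpc_cmd` wrapper, and both RPCs need a running `nvmf_tgt`, so the script only prints the plan):

```shell
# Host-ACL round trip from the transcript (print-only sketch).
SUBSYS=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

steps=(
  "rpc.py nvmf_subsystem_allow_any_host -d $SUBSYS   # deny hosts by default"
  "nvme connect -t tcp -n $SUBSYS -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN   # rejected: host not allowed"
  "rpc.py nvmf_subsystem_add_host $SUBSYS $HOSTNQN   # whitelist this host NQN"
  "nvme connect -t tcp -n $SUBSYS -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN   # now succeeds"
)
printf '%s\n' "${steps[@]}"
```

The later part of the run repeats the inverse check: after `nvmf_subsystem_remove_host` the connect fails again, and `nvmf_subsystem_allow_any_host -e` re-opens the subsystem to any initiator.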
00:14:49.083 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:49.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:49.340 10:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:49.340 [2024-11-19 10:42:38.957079] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:14:49.340 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:49.340 could not add new controller: failed to write to nvme-fabrics device 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:49.340 
10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.340 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:50.717 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:50.717 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:50.717 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:50.717 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:50.717 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:52.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:52.711 10:42:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.711 [2024-11-19 10:42:42.241677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.711 10:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:53.641 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:53.641 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:53.641 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:53.641 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:53.641 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:56.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:56.162 
10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.162 [2024-11-19 10:42:45.557698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.162 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.163 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:57.092 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:57.092 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:57.092 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:57.092 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:57.092 10:42:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:58.985 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:58.986 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:58.986 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:58.986 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:58.986 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:58.986 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:58.986 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:58.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.986 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:58.986 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:58.986 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:58.986 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:58.986 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:58.986 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.242 10:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.242 [2024-11-19 10:42:48.815905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.242 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:59.243 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.243 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.243 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.243 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.243 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.243 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.243 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.243 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:00.173 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:00.173 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:00.173 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:00.173 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:00.173 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:02.698 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:02.698 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:02.698 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:02.698 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:02.698 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:02.698 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:02.698 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:02.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.698 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:02.698 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:02.698 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:02.698 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:02.698 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:02.698 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:02.698 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:02.698 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:02.698 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.698 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.698 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.698 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:02.698 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.698 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.698 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.698 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:02.699 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:02.699 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.699 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.699 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.699 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:15:02.699 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.699 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.699 [2024-11-19 10:42:52.116034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.699 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.699 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:02.699 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.699 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.699 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.699 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:02.699 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.699 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.699 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.699 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.630 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:03.630 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:03.630 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:15:03.630 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:03.630 10:42:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:05.525 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:05.525 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:05.525 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:05.525 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:05.525 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.525 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:05.525 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:05.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.783 [2024-11-19 10:42:55.412282] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.783 10:42:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:07.154 10:42:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:07.154 10:42:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:07.154 10:42:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:07.154 10:42:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:07.154 10:42:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:15:09.050 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:09.050 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:09.050 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:09.050 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:09.050 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:09.050 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:09.050 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:09.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.050 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 [2024-11-19 10:42:58.679615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 [2024-11-19 10:42:58.727628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 
10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:15:09.051 [2024-11-19 10:42:58.775756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.051 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.052 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.052 [2024-11-19 10:42:58.823911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.052 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.052 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:09.052 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.052 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.052 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.052 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:15:09.052 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.052 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.309 [2024-11-19 10:42:58.872082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.309 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.310 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.310 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:09.310 "tick_rate": 2100000000, 00:15:09.310 "poll_groups": [ 00:15:09.310 { 00:15:09.310 "name": "nvmf_tgt_poll_group_000", 00:15:09.310 "admin_qpairs": 2, 00:15:09.310 "io_qpairs": 168, 00:15:09.310 "current_admin_qpairs": 0, 00:15:09.310 "current_io_qpairs": 0, 00:15:09.310 "pending_bdev_io": 0, 00:15:09.310 "completed_nvme_io": 315, 00:15:09.310 "transports": [ 00:15:09.310 { 00:15:09.310 "trtype": "TCP" 00:15:09.310 } 00:15:09.310 ] 00:15:09.310 }, 00:15:09.310 { 00:15:09.310 "name": "nvmf_tgt_poll_group_001", 00:15:09.310 "admin_qpairs": 2, 00:15:09.310 "io_qpairs": 168, 00:15:09.310 "current_admin_qpairs": 0, 00:15:09.310 "current_io_qpairs": 0, 00:15:09.310 "pending_bdev_io": 0, 00:15:09.310 "completed_nvme_io": 220, 00:15:09.310 "transports": [ 00:15:09.310 { 00:15:09.310 "trtype": "TCP" 00:15:09.310 } 00:15:09.310 ] 00:15:09.310 }, 00:15:09.310 { 00:15:09.310 "name": "nvmf_tgt_poll_group_002", 00:15:09.310 "admin_qpairs": 1, 00:15:09.310 "io_qpairs": 168, 00:15:09.310 "current_admin_qpairs": 0, 00:15:09.310 "current_io_qpairs": 0, 00:15:09.310 "pending_bdev_io": 0, 
00:15:09.310 "completed_nvme_io": 268, 00:15:09.310 "transports": [ 00:15:09.310 { 00:15:09.310 "trtype": "TCP" 00:15:09.310 } 00:15:09.310 ] 00:15:09.310 }, 00:15:09.310 { 00:15:09.310 "name": "nvmf_tgt_poll_group_003", 00:15:09.310 "admin_qpairs": 2, 00:15:09.310 "io_qpairs": 168, 00:15:09.310 "current_admin_qpairs": 0, 00:15:09.310 "current_io_qpairs": 0, 00:15:09.310 "pending_bdev_io": 0, 00:15:09.310 "completed_nvme_io": 219, 00:15:09.310 "transports": [ 00:15:09.310 { 00:15:09.310 "trtype": "TCP" 00:15:09.310 } 00:15:09.310 ] 00:15:09.310 } 00:15:09.310 ] 00:15:09.310 }' 00:15:09.310 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:09.310 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:09.310 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:09.310 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:09.310 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:09.310 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:09.310 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:09.310 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:09.310 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:09.310 rmmod nvme_tcp 00:15:09.310 rmmod nvme_fabrics 00:15:09.310 rmmod nvme_keyring 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3862699 ']' 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3862699 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3862699 ']' 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3862699 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:09.310 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3862699 00:15:09.567 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:09.567 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:09.567 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3862699' 00:15:09.567 killing process with pid 3862699 00:15:09.567 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3862699 00:15:09.567 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3862699 00:15:09.567 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:09.567 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:09.567 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:09.567 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:15:09.567 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:15:09.567 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:09.567 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:15:09.567 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:09.567 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:09.567 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.567 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:09.567 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:12.102 00:15:12.102 real 0m33.294s 00:15:12.102 user 1m40.599s 00:15:12.102 sys 0m6.674s 00:15:12.102 10:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.102 ************************************ 00:15:12.102 END TEST nvmf_rpc 00:15:12.102 ************************************ 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:12.102 ************************************ 00:15:12.102 START TEST nvmf_invalid 00:15:12.102 ************************************ 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:12.102 * Looking for test storage... 
00:15:12.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:12.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.102 --rc genhtml_branch_coverage=1 00:15:12.102 --rc 
genhtml_function_coverage=1 00:15:12.102 --rc genhtml_legend=1 00:15:12.102 --rc geninfo_all_blocks=1 00:15:12.102 --rc geninfo_unexecuted_blocks=1 00:15:12.102 00:15:12.102 ' 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:12.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.102 --rc genhtml_branch_coverage=1 00:15:12.102 --rc genhtml_function_coverage=1 00:15:12.102 --rc genhtml_legend=1 00:15:12.102 --rc geninfo_all_blocks=1 00:15:12.102 --rc geninfo_unexecuted_blocks=1 00:15:12.102 00:15:12.102 ' 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:12.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.102 --rc genhtml_branch_coverage=1 00:15:12.102 --rc genhtml_function_coverage=1 00:15:12.102 --rc genhtml_legend=1 00:15:12.102 --rc geninfo_all_blocks=1 00:15:12.102 --rc geninfo_unexecuted_blocks=1 00:15:12.102 00:15:12.102 ' 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:12.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.102 --rc genhtml_branch_coverage=1 00:15:12.102 --rc genhtml_function_coverage=1 00:15:12.102 --rc genhtml_legend=1 00:15:12.102 --rc geninfo_all_blocks=1 00:15:12.102 --rc geninfo_unexecuted_blocks=1 00:15:12.102 00:15:12.102 ' 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.102 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.103 10:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:12.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:12.103 10:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:15:12.103 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:15:18.693 10:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:18.693 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:18.694 10:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:18.694 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:18.694 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:18.694 Found net devices under 0000:86:00.0: cvl_0_0 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:18.694 Found net devices under 0000:86:00.1: cvl_0_1 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:18.694 10:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:18.694 10:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:18.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:18.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:15:18.694 00:15:18.694 --- 10.0.0.2 ping statistics --- 00:15:18.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.694 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:18.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:18.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:15:18.694 00:15:18.694 --- 10.0.0.1 ping statistics --- 00:15:18.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.694 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:18.694 10:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3870536 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3870536 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3870536 ']' 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.694 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:18.695 [2024-11-19 10:43:07.728150] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:15:18.695 [2024-11-19 10:43:07.728194] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.695 [2024-11-19 10:43:07.806053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:18.695 [2024-11-19 10:43:07.850308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.695 [2024-11-19 10:43:07.850346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.695 [2024-11-19 10:43:07.850353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.695 [2024-11-19 10:43:07.850363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.695 [2024-11-19 10:43:07.850368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:18.695 [2024-11-19 10:43:07.851781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.695 [2024-11-19 10:43:07.851892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.695 [2024-11-19 10:43:07.852022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.695 [2024-11-19 10:43:07.852023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.695 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.695 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:15:18.695 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:18.695 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:18.695 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:18.695 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.695 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:18.695 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30419 00:15:18.695 [2024-11-19 10:43:08.152481] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:18.695 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:18.695 { 00:15:18.695 "nqn": "nqn.2016-06.io.spdk:cnode30419", 00:15:18.695 "tgt_name": "foobar", 00:15:18.695 "method": "nvmf_create_subsystem", 00:15:18.695 "req_id": 1 00:15:18.695 } 00:15:18.695 Got JSON-RPC error 
response 00:15:18.695 response: 00:15:18.695 { 00:15:18.695 "code": -32603, 00:15:18.695 "message": "Unable to find target foobar" 00:15:18.695 }' 00:15:18.695 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:18.695 { 00:15:18.695 "nqn": "nqn.2016-06.io.spdk:cnode30419", 00:15:18.695 "tgt_name": "foobar", 00:15:18.695 "method": "nvmf_create_subsystem", 00:15:18.695 "req_id": 1 00:15:18.695 } 00:15:18.695 Got JSON-RPC error response 00:15:18.695 response: 00:15:18.695 { 00:15:18.695 "code": -32603, 00:15:18.695 "message": "Unable to find target foobar" 00:15:18.695 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:18.695 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:18.695 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4521 00:15:18.695 [2024-11-19 10:43:08.353175] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4521: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:18.695 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:18.695 { 00:15:18.695 "nqn": "nqn.2016-06.io.spdk:cnode4521", 00:15:18.695 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:18.695 "method": "nvmf_create_subsystem", 00:15:18.695 "req_id": 1 00:15:18.695 } 00:15:18.695 Got JSON-RPC error response 00:15:18.695 response: 00:15:18.695 { 00:15:18.695 "code": -32602, 00:15:18.695 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:18.695 }' 00:15:18.695 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:18.695 { 00:15:18.695 "nqn": "nqn.2016-06.io.spdk:cnode4521", 00:15:18.695 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:18.695 "method": "nvmf_create_subsystem", 00:15:18.695 
"req_id": 1 00:15:18.695 } 00:15:18.695 Got JSON-RPC error response 00:15:18.695 response: 00:15:18.695 { 00:15:18.695 "code": -32602, 00:15:18.695 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:18.695 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:18.695 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:18.695 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31600 00:15:18.955 [2024-11-19 10:43:08.561877] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31600: invalid model number 'SPDK_Controller' 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:18.955 { 00:15:18.955 "nqn": "nqn.2016-06.io.spdk:cnode31600", 00:15:18.955 "model_number": "SPDK_Controller\u001f", 00:15:18.955 "method": "nvmf_create_subsystem", 00:15:18.955 "req_id": 1 00:15:18.955 } 00:15:18.955 Got JSON-RPC error response 00:15:18.955 response: 00:15:18.955 { 00:15:18.955 "code": -32602, 00:15:18.955 "message": "Invalid MN SPDK_Controller\u001f" 00:15:18.955 }' 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:18.955 { 00:15:18.955 "nqn": "nqn.2016-06.io.spdk:cnode31600", 00:15:18.955 "model_number": "SPDK_Controller\u001f", 00:15:18.955 "method": "nvmf_create_subsystem", 00:15:18.955 "req_id": 1 00:15:18.955 } 00:15:18.955 Got JSON-RPC error response 00:15:18.955 response: 00:15:18.955 { 00:15:18.955 "code": -32602, 00:15:18.955 "message": "Invalid MN SPDK_Controller\u001f" 00:15:18.955 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.955 10:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:18.955 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:18.956 10:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:18.956 10:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 
00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.956 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.957 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:15:18.957 
10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:18.957 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:15:18.957 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.957 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.957 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ U == \- ]] 00:15:18.957 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'U#rS5GEz,r_h!pc2>K#vS' 00:15:18.957 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'U#rS5GEz,r_h!pc2>K#vS' nqn.2016-06.io.spdk:cnode5927 00:15:19.215 [2024-11-19 10:43:08.919075] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5927: invalid serial number 'U#rS5GEz,r_h!pc2>K#vS' 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:19.215 { 00:15:19.215 "nqn": "nqn.2016-06.io.spdk:cnode5927", 00:15:19.215 "serial_number": "U#rS5GEz,r_h!pc2>K#vS", 00:15:19.215 "method": "nvmf_create_subsystem", 00:15:19.215 "req_id": 1 00:15:19.215 } 00:15:19.215 Got JSON-RPC error response 00:15:19.215 response: 00:15:19.215 { 00:15:19.215 "code": -32602, 00:15:19.215 "message": "Invalid SN U#rS5GEz,r_h!pc2>K#vS" 00:15:19.215 }' 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:19.215 { 00:15:19.215 "nqn": "nqn.2016-06.io.spdk:cnode5927", 00:15:19.215 "serial_number": "U#rS5GEz,r_h!pc2>K#vS", 00:15:19.215 "method": "nvmf_create_subsystem", 00:15:19.215 "req_id": 1 00:15:19.215 } 00:15:19.215 Got JSON-RPC error response 00:15:19.215 response: 00:15:19.215 { 00:15:19.215 "code": -32602, 00:15:19.215 "message": "Invalid 
SN U#rS5GEz,r_h!pc2>K#vS" 00:15:19.215 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:15:19.215 10:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.215 10:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.215 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.215 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.475 10:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:15:19.475 10:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:19.475 10:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:15:19.475 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:15:19.476 10:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:19.476 10:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:19.476 10:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:15:19.476 10:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 9 == \- ]] 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '9ASZ%Xi,9GW56d X8g`s3{>On-%DBF}$xNuORO3^' 00:15:19.476 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '9ASZ%Xi,9GW56d X8g`s3{>On-%DBF}$xNuORO3^' nqn.2016-06.io.spdk:cnode16279 00:15:19.735 [2024-11-19 10:43:09.388635] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16279: invalid model number '9ASZ%Xi,9GW56d X8g`s3{>On-%DBF}$xNuORO3^' 00:15:19.735 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:19.735 { 00:15:19.735 "nqn": "nqn.2016-06.io.spdk:cnode16279", 00:15:19.735 "model_number": "9ASZ%Xi,9GW56d X8g`s3{>On-%D\u007fBF}$xNuORO3^", 00:15:19.735 "method": "nvmf_create_subsystem", 00:15:19.735 "req_id": 1 00:15:19.735 } 00:15:19.735 Got JSON-RPC error 
response 00:15:19.735 response: 00:15:19.735 { 00:15:19.735 "code": -32602, 00:15:19.735 "message": "Invalid MN 9ASZ%Xi,9GW56d X8g`s3{>On-%D\u007fBF}$xNuORO3^" 00:15:19.735 }' 00:15:19.735 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:19.735 { 00:15:19.735 "nqn": "nqn.2016-06.io.spdk:cnode16279", 00:15:19.735 "model_number": "9ASZ%Xi,9GW56d X8g`s3{>On-%D\u007fBF}$xNuORO3^", 00:15:19.735 "method": "nvmf_create_subsystem", 00:15:19.735 "req_id": 1 00:15:19.735 } 00:15:19.735 Got JSON-RPC error response 00:15:19.735 response: 00:15:19.735 { 00:15:19.735 "code": -32602, 00:15:19.735 "message": "Invalid MN 9ASZ%Xi,9GW56d X8g`s3{>On-%D\u007fBF}$xNuORO3^" 00:15:19.735 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:19.735 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:19.994 [2024-11-19 10:43:09.585358] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.994 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:20.251 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:20.251 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:20.251 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:20.251 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:20.251 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:20.251 [2024-11-19 10:43:09.995881] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: 
Unable to remove listener, rc -2 00:15:20.251 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:20.251 { 00:15:20.251 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:20.251 "listen_address": { 00:15:20.251 "trtype": "tcp", 00:15:20.251 "traddr": "", 00:15:20.251 "trsvcid": "4421" 00:15:20.251 }, 00:15:20.251 "method": "nvmf_subsystem_remove_listener", 00:15:20.251 "req_id": 1 00:15:20.251 } 00:15:20.251 Got JSON-RPC error response 00:15:20.251 response: 00:15:20.251 { 00:15:20.251 "code": -32602, 00:15:20.251 "message": "Invalid parameters" 00:15:20.251 }' 00:15:20.251 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:20.251 { 00:15:20.251 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:20.251 "listen_address": { 00:15:20.251 "trtype": "tcp", 00:15:20.251 "traddr": "", 00:15:20.251 "trsvcid": "4421" 00:15:20.251 }, 00:15:20.251 "method": "nvmf_subsystem_remove_listener", 00:15:20.251 "req_id": 1 00:15:20.251 } 00:15:20.251 Got JSON-RPC error response 00:15:20.251 response: 00:15:20.251 { 00:15:20.251 "code": -32602, 00:15:20.251 "message": "Invalid parameters" 00:15:20.251 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:20.251 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19257 -i 0 00:15:20.508 [2024-11-19 10:43:10.208618] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19257: invalid cntlid range [0-65519] 00:15:20.508 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:20.508 { 00:15:20.508 "nqn": "nqn.2016-06.io.spdk:cnode19257", 00:15:20.508 "min_cntlid": 0, 00:15:20.508 "method": "nvmf_create_subsystem", 00:15:20.508 "req_id": 1 00:15:20.508 } 00:15:20.508 Got JSON-RPC error response 00:15:20.508 response: 00:15:20.508 { 00:15:20.508 
"code": -32602, 00:15:20.508 "message": "Invalid cntlid range [0-65519]" 00:15:20.508 }' 00:15:20.508 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:20.508 { 00:15:20.509 "nqn": "nqn.2016-06.io.spdk:cnode19257", 00:15:20.509 "min_cntlid": 0, 00:15:20.509 "method": "nvmf_create_subsystem", 00:15:20.509 "req_id": 1 00:15:20.509 } 00:15:20.509 Got JSON-RPC error response 00:15:20.509 response: 00:15:20.509 { 00:15:20.509 "code": -32602, 00:15:20.509 "message": "Invalid cntlid range [0-65519]" 00:15:20.509 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:20.509 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19614 -i 65520 00:15:20.766 [2024-11-19 10:43:10.397238] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19614: invalid cntlid range [65520-65519] 00:15:20.766 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:20.766 { 00:15:20.766 "nqn": "nqn.2016-06.io.spdk:cnode19614", 00:15:20.766 "min_cntlid": 65520, 00:15:20.766 "method": "nvmf_create_subsystem", 00:15:20.766 "req_id": 1 00:15:20.766 } 00:15:20.766 Got JSON-RPC error response 00:15:20.766 response: 00:15:20.766 { 00:15:20.766 "code": -32602, 00:15:20.766 "message": "Invalid cntlid range [65520-65519]" 00:15:20.766 }' 00:15:20.766 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:20.766 { 00:15:20.766 "nqn": "nqn.2016-06.io.spdk:cnode19614", 00:15:20.766 "min_cntlid": 65520, 00:15:20.766 "method": "nvmf_create_subsystem", 00:15:20.766 "req_id": 1 00:15:20.766 } 00:15:20.766 Got JSON-RPC error response 00:15:20.766 response: 00:15:20.766 { 00:15:20.766 "code": -32602, 00:15:20.766 "message": "Invalid cntlid range [65520-65519]" 00:15:20.766 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:15:20.766 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26434 -I 0 00:15:21.024 [2024-11-19 10:43:10.597889] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26434: invalid cntlid range [1-0] 00:15:21.024 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:21.024 { 00:15:21.024 "nqn": "nqn.2016-06.io.spdk:cnode26434", 00:15:21.024 "max_cntlid": 0, 00:15:21.024 "method": "nvmf_create_subsystem", 00:15:21.024 "req_id": 1 00:15:21.024 } 00:15:21.024 Got JSON-RPC error response 00:15:21.024 response: 00:15:21.024 { 00:15:21.024 "code": -32602, 00:15:21.024 "message": "Invalid cntlid range [1-0]" 00:15:21.024 }' 00:15:21.024 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:21.024 { 00:15:21.024 "nqn": "nqn.2016-06.io.spdk:cnode26434", 00:15:21.024 "max_cntlid": 0, 00:15:21.024 "method": "nvmf_create_subsystem", 00:15:21.024 "req_id": 1 00:15:21.024 } 00:15:21.024 Got JSON-RPC error response 00:15:21.024 response: 00:15:21.024 { 00:15:21.024 "code": -32602, 00:15:21.024 "message": "Invalid cntlid range [1-0]" 00:15:21.024 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:21.024 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14097 -I 65520 00:15:21.024 [2024-11-19 10:43:10.794541] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14097: invalid cntlid range [1-65520] 00:15:21.282 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:21.282 { 00:15:21.282 "nqn": "nqn.2016-06.io.spdk:cnode14097", 00:15:21.282 "max_cntlid": 65520, 00:15:21.282 "method": 
"nvmf_create_subsystem", 00:15:21.282 "req_id": 1 00:15:21.282 } 00:15:21.282 Got JSON-RPC error response 00:15:21.282 response: 00:15:21.282 { 00:15:21.282 "code": -32602, 00:15:21.282 "message": "Invalid cntlid range [1-65520]" 00:15:21.282 }' 00:15:21.282 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:15:21.282 { 00:15:21.282 "nqn": "nqn.2016-06.io.spdk:cnode14097", 00:15:21.282 "max_cntlid": 65520, 00:15:21.282 "method": "nvmf_create_subsystem", 00:15:21.282 "req_id": 1 00:15:21.282 } 00:15:21.282 Got JSON-RPC error response 00:15:21.282 response: 00:15:21.282 { 00:15:21.282 "code": -32602, 00:15:21.282 "message": "Invalid cntlid range [1-65520]" 00:15:21.282 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:21.282 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3365 -i 6 -I 5 00:15:21.282 [2024-11-19 10:43:10.995277] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3365: invalid cntlid range [6-5] 00:15:21.282 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:21.282 { 00:15:21.282 "nqn": "nqn.2016-06.io.spdk:cnode3365", 00:15:21.282 "min_cntlid": 6, 00:15:21.282 "max_cntlid": 5, 00:15:21.282 "method": "nvmf_create_subsystem", 00:15:21.282 "req_id": 1 00:15:21.282 } 00:15:21.282 Got JSON-RPC error response 00:15:21.282 response: 00:15:21.282 { 00:15:21.282 "code": -32602, 00:15:21.282 "message": "Invalid cntlid range [6-5]" 00:15:21.282 }' 00:15:21.282 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:21.282 { 00:15:21.282 "nqn": "nqn.2016-06.io.spdk:cnode3365", 00:15:21.282 "min_cntlid": 6, 00:15:21.282 "max_cntlid": 5, 00:15:21.282 "method": "nvmf_create_subsystem", 00:15:21.282 "req_id": 1 00:15:21.282 } 00:15:21.282 Got JSON-RPC error 
response 00:15:21.282 response: 00:15:21.282 { 00:15:21.282 "code": -32602, 00:15:21.282 "message": "Invalid cntlid range [6-5]" 00:15:21.282 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:21.282 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:21.539 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:21.539 { 00:15:21.539 "name": "foobar", 00:15:21.540 "method": "nvmf_delete_target", 00:15:21.540 "req_id": 1 00:15:21.540 } 00:15:21.540 Got JSON-RPC error response 00:15:21.540 response: 00:15:21.540 { 00:15:21.540 "code": -32602, 00:15:21.540 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:21.540 }' 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:21.540 { 00:15:21.540 "name": "foobar", 00:15:21.540 "method": "nvmf_delete_target", 00:15:21.540 "req_id": 1 00:15:21.540 } 00:15:21.540 Got JSON-RPC error response 00:15:21.540 response: 00:15:21.540 { 00:15:21.540 "code": -32602, 00:15:21.540 "message": "The specified target doesn't exist, cannot delete it." 
00:15:21.540 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:21.540 rmmod nvme_tcp 00:15:21.540 rmmod nvme_fabrics 00:15:21.540 rmmod nvme_keyring 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3870536 ']' 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3870536 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3870536 ']' 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3870536 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3870536 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3870536' 00:15:21.540 killing process with pid 3870536 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3870536 00:15:21.540 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3870536 00:15:21.798 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:21.798 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:21.798 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:21.798 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:15:21.798 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:15:21.798 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:21.798 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:15:21.798 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:21.798 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:21.798 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.798 10:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.798 10:43:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.333 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:24.333 00:15:24.333 real 0m12.052s 00:15:24.333 user 0m18.571s 00:15:24.333 sys 0m5.401s 00:15:24.333 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.333 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:24.333 ************************************ 00:15:24.333 END TEST nvmf_invalid 00:15:24.333 ************************************ 00:15:24.333 10:43:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:24.333 10:43:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:24.333 10:43:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.333 10:43:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:24.333 ************************************ 00:15:24.333 START TEST nvmf_connect_stress 00:15:24.333 ************************************ 00:15:24.333 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:24.333 * Looking for test storage... 
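The nvmf_invalid run above repeatedly calls `nvmf_create_subsystem` through rpc.py with out-of-range `min_cntlid`/`max_cntlid` values and checks that each request fails with JSON-RPC error -32602 and a message matching `Invalid cntlid range`. The bounds implied by the logged errors ([0-65519], [65520-65519], [1-0], [1-65520], [6-5] all rejected) suggest a valid range of 1 through 65519 with min ≤ max. The following is a hypothetical Python distillation of that check, inferred from the log output only; SPDK's real validation lives in `nvmf_rpc.c` (`rpc_nvmf_create_subsystem`) and is not reproduced here:

```python
# Hypothetical re-implementation of the cntlid range rule exercised by
# target/invalid.sh above. Bounds are inferred from the logged error
# strings, e.g. "Invalid cntlid range [0-65519]"; this is a sketch,
# not SPDK's actual C code.
CNTLID_MIN = 1
CNTLID_MAX = 65519  # 0xFFEF

def validate_cntlid_range(min_cntlid: int = CNTLID_MIN,
                          max_cntlid: int = CNTLID_MAX) -> bool:
    """Return True when [min_cntlid, max_cntlid] is an acceptable range."""
    if min_cntlid < CNTLID_MIN or max_cntlid > CNTLID_MAX:
        return False
    return min_cntlid <= max_cntlid

# Each invalid combination sent via rpc.py in the log should be rejected,
# mirroring the -32602 "Invalid cntlid range" responses recorded above.
for bad in [(0, 65519), (65520, 65519), (1, 0), (1, 65520), (6, 5)]:
    assert not validate_cntlid_range(*bad)
assert validate_cntlid_range()  # default range [1-65519] is accepted
```

The test script only pattern-matches the error text (`[[ $out == *Invalid\ cntlid\ range* ]]`), so any range rejected by the target with that message passes the check.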
00:15:24.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:24.333 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:24.333 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:15:24.333 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:24.333 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:24.333 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.333 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:15:24.334 10:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.334 10:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:24.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.334 --rc genhtml_branch_coverage=1 00:15:24.334 --rc genhtml_function_coverage=1 00:15:24.334 --rc genhtml_legend=1 00:15:24.334 --rc geninfo_all_blocks=1 00:15:24.334 --rc geninfo_unexecuted_blocks=1 00:15:24.334 00:15:24.334 ' 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:24.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.334 --rc genhtml_branch_coverage=1 00:15:24.334 --rc genhtml_function_coverage=1 00:15:24.334 --rc genhtml_legend=1 00:15:24.334 --rc geninfo_all_blocks=1 00:15:24.334 --rc geninfo_unexecuted_blocks=1 00:15:24.334 00:15:24.334 ' 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:24.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.334 --rc genhtml_branch_coverage=1 00:15:24.334 --rc genhtml_function_coverage=1 00:15:24.334 --rc genhtml_legend=1 00:15:24.334 --rc geninfo_all_blocks=1 00:15:24.334 --rc geninfo_unexecuted_blocks=1 00:15:24.334 00:15:24.334 ' 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:24.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.334 --rc genhtml_branch_coverage=1 00:15:24.334 --rc genhtml_function_coverage=1 00:15:24.334 --rc genhtml_legend=1 00:15:24.334 --rc geninfo_all_blocks=1 00:15:24.334 --rc geninfo_unexecuted_blocks=1 00:15:24.334 00:15:24.334 ' 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:24.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
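The xtrace above (scripts/common.sh `lt 1.15 2` → `cmp_versions 1.15 '<' 2`) shows the version check applied to the installed lcov: each version string is split on `.`, `-`, and `:` into numeric fields, which are compared pairwise with missing fields treated as 0. A minimal Python rendition of that comparison, matching the traced shell logic (function names here are illustrative, not part of SPDK):

```python
import re

def split_version(v: str) -> list[int]:
    # scripts/common.sh splits with IFS=.-: ; non-numeric fields fall back to 0
    return [int(p) if p.isdigit() else 0 for p in re.split(r"[.\-:]", v)]

def version_lt(a: str, b: str) -> bool:
    """True when version string a sorts strictly before b, field by field."""
    va, vb = split_version(a), split_version(b)
    length = max(len(va), len(vb))
    va += [0] * (length - len(va))   # pad shorter version with zeros
    vb += [0] * (length - len(vb))
    for x, y in zip(va, vb):
        if x != y:
            return x < y
    return False  # equal versions are not "less than"

assert version_lt("1.15", "2")        # the lt 1.15 2 check traced in the log
assert not version_lt("2", "1.15")
assert not version_lt("1.15", "1.15")
```

In the log this check succeeds (lcov predates 2.x), so the test selects the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option spelling for LCOV_OPTS.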
00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:24.334 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:24.335 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:24.335 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.335 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.335 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.335 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:24.335 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:24.335 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:15:24.335 10:43:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:30.903 10:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:30.903 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:30.903 10:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:30.903 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.903 10:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:30.903 Found net devices under 0000:86:00.0: cvl_0_0 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:30.903 Found net devices under 0000:86:00.1: cvl_0_1 
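The device-discovery trace above enumerates the E810 PCI NICs (0000:86:00.0 and 0000:86:00.1, both 0x8086:0x159b), maps each to its net device (cvl_0_0, cvl_0_1), and then `nvmf_tcp_init` assigns roles: with more than one device found, the first becomes the target interface and the second the initiator, with 10.0.0.2/10.0.0.1 as their respective IPs. A hypothetical distillation of that selection step, using only names and addresses visible in the log (the single-device fallback branch is an assumption, since this run takes the two-device path):

```python
# Sketch of the role assignment performed by nvmf_tcp_init in
# test/nvmf/common.sh as traced above; not SPDK shell code itself.
def nvmf_tcp_init(net_devs: list[str]) -> dict[str, str]:
    cfg = {
        "NVMF_FIRST_INITIATOR_IP": "10.0.0.1",
        "NVMF_FIRST_TARGET_IP": "10.0.0.2",
    }
    if len(net_devs) > 1:
        # Two or more physical devices: dedicate one to each role,
        # matching the (( 2 > 1 )) branch in the log.
        cfg["NVMF_TARGET_INTERFACE"] = net_devs[0]
        cfg["NVMF_INITIATOR_INTERFACE"] = net_devs[1]
    elif net_devs:
        # Assumed fallback: a lone device serves both roles.
        cfg["NVMF_TARGET_INTERFACE"] = net_devs[0]
        cfg["NVMF_INITIATOR_INTERFACE"] = net_devs[0]
    return cfg

cfg = nvmf_tcp_init(["cvl_0_0", "cvl_0_1"])
assert cfg["NVMF_TARGET_INTERFACE"] == "cvl_0_0"
assert cfg["NVMF_INITIATOR_INTERFACE"] == "cvl_0_1"
```

The target interface is then moved into the `cvl_0_0_ns_spdk` network namespace later in the setup, which is why the earlier cleanup path checks `NVMF_TARGET_NAMESPACE` before flushing addresses.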
00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:30.903 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:30.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:30.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:15:30.903 00:15:30.903 --- 10.0.0.2 ping statistics --- 00:15:30.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.904 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:30.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:30.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:15:30.904 00:15:30.904 --- 10.0.0.1 ping statistics --- 00:15:30.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.904 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:30.904 10:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3874713 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3874713 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3874713 ']' 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.904 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.904 [2024-11-19 10:43:19.863114] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:15:30.904 [2024-11-19 10:43:19.863158] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.904 [2024-11-19 10:43:19.943314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:30.904 [2024-11-19 10:43:19.984430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.904 [2024-11-19 10:43:19.984470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.904 [2024-11-19 10:43:19.984478] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.904 [2024-11-19 10:43:19.984484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.904 [2024-11-19 10:43:19.984489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:30.904 [2024-11-19 10:43:19.985932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.904 [2024-11-19 10:43:19.986043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.904 [2024-11-19 10:43:19.986044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.904 [2024-11-19 10:43:20.122434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.904 [2024-11-19 10:43:20.142648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.904 NULL1 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3874885 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.904 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.161 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.161 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:31.161 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.161 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.161 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.725 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.725 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:31.725 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.725 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.725 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.982 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.982 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:31.982 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.982 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.982 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.240 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.240 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:32.240 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.240 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.240 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.497 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.497 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:32.497 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.497 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.497 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.754 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.754 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:32.754 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.754 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.754 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.318 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.318 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:33.318 10:43:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.318 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.318 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.575 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.575 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:33.575 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.575 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.575 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.832 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.832 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:33.832 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.832 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.832 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.089 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.090 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:34.090 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.090 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.090 
10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.654 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.654 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:34.654 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.654 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.654 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.912 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.912 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:34.912 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.912 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.912 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.170 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.170 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:35.170 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.170 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.170 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.427 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.427 
10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:35.427 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.427 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.427 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.683 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.683 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:35.683 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.683 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.683 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.249 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.249 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:36.249 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.249 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.249 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.506 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.506 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:36.506 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:15:36.506 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.506 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.764 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.764 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:36.764 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.764 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.764 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.021 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.021 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:37.021 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.021 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.022 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.586 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.586 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:37.586 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.586 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.586 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:15:37.857 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.857 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:37.857 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.857 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.857 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.118 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.118 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:38.118 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.118 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.118 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.376 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.376 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:38.376 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.376 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.376 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.632 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.632 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3874885 00:15:38.632 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.632 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.632 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.196 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.196 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:39.196 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.196 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.196 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.453 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.453 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:39.453 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.453 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.453 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.710 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.710 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:39.710 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.710 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:39.710 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.967 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.967 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:39.967 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.967 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.967 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.224 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.224 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:40.224 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.224 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.224 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.482 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3874885 00:15:40.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3874885) - No such process 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3874885 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:40.740 rmmod nvme_tcp 00:15:40.740 rmmod nvme_fabrics 00:15:40.740 rmmod nvme_keyring 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3874713 ']' 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3874713 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3874713 ']' 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3874713 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@959 -- # uname 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3874713 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3874713' 00:15:40.740 killing process with pid 3874713 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3874713 00:15:40.740 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3874713 00:15:40.999 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:40.999 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:40.999 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:40.999 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:15:40.999 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:15:40.999 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:40.999 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:15:40.999 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:40.999 10:43:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:40.999 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.999 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.999 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:43.538 00:15:43.538 real 0m19.129s 00:15:43.538 user 0m39.587s 00:15:43.538 sys 0m8.493s 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.538 ************************************ 00:15:43.538 END TEST nvmf_connect_stress 00:15:43.538 ************************************ 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:43.538 ************************************ 00:15:43.538 START TEST nvmf_fused_ordering 00:15:43.538 ************************************ 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:43.538 * Looking for test storage... 
00:15:43.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:43.538 10:43:32 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.538 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.539 10:43:32 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:43.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.539 --rc genhtml_branch_coverage=1 00:15:43.539 --rc genhtml_function_coverage=1 00:15:43.539 --rc genhtml_legend=1 00:15:43.539 --rc geninfo_all_blocks=1 00:15:43.539 --rc geninfo_unexecuted_blocks=1 00:15:43.539 00:15:43.539 ' 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:43.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.539 --rc genhtml_branch_coverage=1 00:15:43.539 --rc genhtml_function_coverage=1 00:15:43.539 --rc genhtml_legend=1 00:15:43.539 --rc geninfo_all_blocks=1 00:15:43.539 --rc geninfo_unexecuted_blocks=1 00:15:43.539 00:15:43.539 ' 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:43.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.539 --rc genhtml_branch_coverage=1 00:15:43.539 --rc genhtml_function_coverage=1 00:15:43.539 --rc genhtml_legend=1 00:15:43.539 --rc geninfo_all_blocks=1 00:15:43.539 --rc geninfo_unexecuted_blocks=1 00:15:43.539 00:15:43.539 ' 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:43.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.539 --rc genhtml_branch_coverage=1 00:15:43.539 --rc genhtml_function_coverage=1 00:15:43.539 --rc genhtml_legend=1 00:15:43.539 --rc geninfo_all_blocks=1 00:15:43.539 --rc geninfo_unexecuted_blocks=1 00:15:43.539 00:15:43.539 ' 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:43.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:43.539 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:43.539 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:15:43.539 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:43.539 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.539 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:43.539 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:43.539 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:43.539 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.539 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:43.539 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.539 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:43.539 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:43.539 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:15:43.539 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:48.921 10:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:48.921 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:48.921 10:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:48.921 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.921 10:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:48.921 Found net devices under 0000:86:00.0: cvl_0_0 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:48.921 Found net devices under 0000:86:00.1: cvl_0_1 
00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:48.921 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:48.922 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:49.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:49.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:15:49.181 00:15:49.181 --- 10.0.0.2 ping statistics --- 00:15:49.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.181 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:49.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:15:49.181 00:15:49.181 --- 10.0.0.1 ping statistics --- 00:15:49.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.181 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:49.181 10:43:38 
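The nvmf_tcp_init sequence traced above (common.sh@250-291) can be read as a small standalone recipe: flush both E810 port interfaces, move the target-side one into a network namespace, address both ends, bring links up, open TCP port 4420, and verify with ping in each direction. Below is a sketch of that flow; the interface names (cvl_0_0, cvl_0_1), namespace name, addresses, and port are taken from the log, while the run() wrapper that only prints each command is an illustrative addition so the sketch can be previewed without root privileges.

```shell
#!/bin/sh
# Sketch of the nvmf_tcp_init flow seen in the log: the target-side interface
# (cvl_0_0) is moved into a namespace so target and initiator traffic cross a
# real NIC pair instead of loopback. run() prints each command rather than
# executing it, so this preview needs no root and touches no devices.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk      # namespace hosting the nvmf target (from the log)
TGT_IF=cvl_0_0          # target side, ends up inside the namespace
INI_IF=cvl_0_1          # initiator side, stays in the root namespace

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Accept NVMe/TCP traffic (port 4420) arriving on the initiator-side interface.
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Reachability checks in both directions, matching the pings in the log.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

With the namespace in place, the target binary is launched under `ip netns exec cvl_0_0_ns_spdk`, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the log that follows.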
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:49.181 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.440 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3880126 00:15:49.440 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3880126 00:15:49.440 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:49.440 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3880126 ']' 00:15:49.441 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.441 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.441 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.441 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.441 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.441 [2024-11-19 10:43:39.021978] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:15:49.441 [2024-11-19 10:43:39.022028] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.441 [2024-11-19 10:43:39.101639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.441 [2024-11-19 10:43:39.142262] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.441 [2024-11-19 10:43:39.142298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.441 [2024-11-19 10:43:39.142305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.441 [2024-11-19 10:43:39.142311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.441 [2024-11-19 10:43:39.142316] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:49.441 [2024-11-19 10:43:39.142864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.700 [2024-11-19 10:43:39.278044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.700 [2024-11-19 10:43:39.298230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.700 NULL1 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.700 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:49.700 [2024-11-19 10:43:39.356981] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:15:49.700 [2024-11-19 10:43:39.357021] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3880154 ] 00:15:49.959 Attached to nqn.2016-06.io.spdk:cnode1 00:15:49.959 Namespace ID: 1 size: 1GB 00:15:49.959 fused_ordering(0) 00:15:49.959 fused_ordering(1) 00:15:49.959 fused_ordering(2) 00:15:49.959 fused_ordering(3) 00:15:49.959 fused_ordering(4) 00:15:49.959 fused_ordering(5) 00:15:49.959 fused_ordering(6) 00:15:49.959 fused_ordering(7) 00:15:49.959 fused_ordering(8) 00:15:49.959 fused_ordering(9) 00:15:49.959 fused_ordering(10) 00:15:49.959 fused_ordering(11) 00:15:49.959 fused_ordering(12) 00:15:49.959 fused_ordering(13) 00:15:49.959 fused_ordering(14) 00:15:49.959 fused_ordering(15) 00:15:49.959 fused_ordering(16) 00:15:49.959 fused_ordering(17) 00:15:49.959 fused_ordering(18) 00:15:49.959 fused_ordering(19) 00:15:49.959 fused_ordering(20) 00:15:49.959 fused_ordering(21) 00:15:49.959 fused_ordering(22) 00:15:49.959 fused_ordering(23) 00:15:49.959 fused_ordering(24) 00:15:49.959 fused_ordering(25) 00:15:49.959 fused_ordering(26) 00:15:49.959 fused_ordering(27) 00:15:49.959 
fused_ordering(28) 00:15:49.959 fused_ordering(29) 00:15:49.959 fused_ordering(30) 00:15:49.959 fused_ordering(31) 00:15:49.959 fused_ordering(32) 00:15:49.959 fused_ordering(33) 00:15:49.959 fused_ordering(34) 00:15:49.959 fused_ordering(35) 00:15:49.959 fused_ordering(36) 00:15:49.959 fused_ordering(37) 00:15:49.959 fused_ordering(38) 00:15:49.959 fused_ordering(39) 00:15:49.959 fused_ordering(40) 00:15:49.959 fused_ordering(41) 00:15:49.959 fused_ordering(42) 00:15:49.959 fused_ordering(43) 00:15:49.959 fused_ordering(44) 00:15:49.959 fused_ordering(45) 00:15:49.959 fused_ordering(46) 00:15:49.959 fused_ordering(47) 00:15:49.959 fused_ordering(48) 00:15:49.959 fused_ordering(49) 00:15:49.959 fused_ordering(50) 00:15:49.959 fused_ordering(51) 00:15:49.959 fused_ordering(52) 00:15:49.959 fused_ordering(53) 00:15:49.959 fused_ordering(54) 00:15:49.959 fused_ordering(55) 00:15:49.959 fused_ordering(56) 00:15:49.959 fused_ordering(57) 00:15:49.959 fused_ordering(58) 00:15:49.959 fused_ordering(59) 00:15:49.959 fused_ordering(60) 00:15:49.959 fused_ordering(61) 00:15:49.959 fused_ordering(62) 00:15:49.959 fused_ordering(63) 00:15:49.959 fused_ordering(64) 00:15:49.959 fused_ordering(65) 00:15:49.959 fused_ordering(66) 00:15:49.959 fused_ordering(67) 00:15:49.959 fused_ordering(68) 00:15:49.959 fused_ordering(69) 00:15:49.959 fused_ordering(70) 00:15:49.959 fused_ordering(71) 00:15:49.959 fused_ordering(72) 00:15:49.959 fused_ordering(73) 00:15:49.959 fused_ordering(74) 00:15:49.959 fused_ordering(75) 00:15:49.959 fused_ordering(76) 00:15:49.959 fused_ordering(77) 00:15:49.959 fused_ordering(78) 00:15:49.959 fused_ordering(79) 00:15:49.959 fused_ordering(80) 00:15:49.959 fused_ordering(81) 00:15:49.959 fused_ordering(82) 00:15:49.959 fused_ordering(83) 00:15:49.959 fused_ordering(84) 00:15:49.959 fused_ordering(85) 00:15:49.959 fused_ordering(86) 00:15:49.959 fused_ordering(87) 00:15:49.959 fused_ordering(88) 00:15:49.959 fused_ordering(89) 00:15:49.959 
fused_ordering(90) 00:15:49.959 fused_ordering(91) 00:15:49.959 fused_ordering(92) 00:15:49.959 fused_ordering(93) 00:15:49.959 fused_ordering(94) 00:15:49.959 fused_ordering(95) 00:15:49.959 fused_ordering(96) 00:15:49.959 fused_ordering(97) 00:15:49.959 fused_ordering(98) 00:15:49.959 fused_ordering(99) 00:15:49.959 fused_ordering(100) 00:15:49.959 fused_ordering(101) 00:15:49.959 fused_ordering(102) 00:15:49.959 fused_ordering(103) 00:15:49.959 fused_ordering(104) 00:15:49.959 fused_ordering(105) 00:15:49.959 fused_ordering(106) 00:15:49.959 fused_ordering(107) 00:15:49.959 fused_ordering(108) 00:15:49.959 fused_ordering(109) 00:15:49.959 fused_ordering(110) 00:15:49.959 fused_ordering(111) 00:15:49.960 fused_ordering(112) 00:15:49.960 fused_ordering(113) 00:15:49.960 fused_ordering(114) 00:15:49.960 fused_ordering(115) 00:15:49.960 fused_ordering(116) 00:15:49.960 fused_ordering(117) 00:15:49.960 fused_ordering(118) 00:15:49.960 fused_ordering(119) 00:15:49.960 fused_ordering(120) 00:15:49.960 fused_ordering(121) 00:15:49.960 fused_ordering(122) 00:15:49.960 fused_ordering(123) 00:15:49.960 fused_ordering(124) 00:15:49.960 fused_ordering(125) 00:15:49.960 fused_ordering(126) 00:15:49.960 fused_ordering(127) 00:15:49.960 fused_ordering(128) 00:15:49.960 fused_ordering(129) 00:15:49.960 fused_ordering(130) 00:15:49.960 fused_ordering(131) 00:15:49.960 fused_ordering(132) 00:15:49.960 fused_ordering(133) 00:15:49.960 fused_ordering(134) 00:15:49.960 fused_ordering(135) 00:15:49.960 fused_ordering(136) 00:15:49.960 fused_ordering(137) 00:15:49.960 fused_ordering(138) 00:15:49.960 fused_ordering(139) 00:15:49.960 fused_ordering(140) 00:15:49.960 fused_ordering(141) 00:15:49.960 fused_ordering(142) 00:15:49.960 fused_ordering(143) 00:15:49.960 fused_ordering(144) 00:15:49.960 fused_ordering(145) 00:15:49.960 fused_ordering(146) 00:15:49.960 fused_ordering(147) 00:15:49.960 fused_ordering(148) 00:15:49.960 fused_ordering(149) 00:15:49.960 fused_ordering(150) 
00:15:49.960 fused_ordering(151) 00:15:49.960 fused_ordering(152) 00:15:49.960 fused_ordering(153) 00:15:49.960 fused_ordering(154) 00:15:49.960 fused_ordering(155) 00:15:49.960 fused_ordering(156) 00:15:49.960 fused_ordering(157) 00:15:49.960 fused_ordering(158) 00:15:49.960 fused_ordering(159) 00:15:49.960 fused_ordering(160) 00:15:49.960 fused_ordering(161) 00:15:49.960 fused_ordering(162) 00:15:49.960 fused_ordering(163) 00:15:49.960 fused_ordering(164) 00:15:49.960 fused_ordering(165) 00:15:49.960 fused_ordering(166) 00:15:49.960 fused_ordering(167) 00:15:49.960 fused_ordering(168) 00:15:49.960 fused_ordering(169) 00:15:49.960 fused_ordering(170) 00:15:49.960 fused_ordering(171) 00:15:49.960 fused_ordering(172) 00:15:49.960 fused_ordering(173) 00:15:49.960 fused_ordering(174) 00:15:49.960 fused_ordering(175) 00:15:49.960 fused_ordering(176) 00:15:49.960 fused_ordering(177) 00:15:49.960 fused_ordering(178) 00:15:49.960 fused_ordering(179) 00:15:49.960 fused_ordering(180) 00:15:49.960 fused_ordering(181) 00:15:49.960 fused_ordering(182) 00:15:49.960 fused_ordering(183) 00:15:49.960 fused_ordering(184) 00:15:49.960 fused_ordering(185) 00:15:49.960 fused_ordering(186) 00:15:49.960 fused_ordering(187) 00:15:49.960 fused_ordering(188) 00:15:49.960 fused_ordering(189) 00:15:49.960 fused_ordering(190) 00:15:49.960 fused_ordering(191) 00:15:49.960 fused_ordering(192) 00:15:49.960 fused_ordering(193) 00:15:49.960 fused_ordering(194) 00:15:49.960 fused_ordering(195) 00:15:49.960 fused_ordering(196) 00:15:49.960 fused_ordering(197) 00:15:49.960 fused_ordering(198) 00:15:49.960 fused_ordering(199) 00:15:49.960 fused_ordering(200) 00:15:49.960 fused_ordering(201) 00:15:49.960 fused_ordering(202) 00:15:49.960 fused_ordering(203) 00:15:49.960 fused_ordering(204) 00:15:49.960 fused_ordering(205) 00:15:50.219 fused_ordering(206) 00:15:50.219 fused_ordering(207) 00:15:50.219 fused_ordering(208) 00:15:50.219 fused_ordering(209) 00:15:50.219 fused_ordering(210) 00:15:50.219 
fused_ordering(211) 00:15:50.219 fused_ordering(212) 00:15:50.219 fused_ordering(213) 00:15:50.219 fused_ordering(214) 00:15:50.219 fused_ordering(215) 00:15:50.219 fused_ordering(216) 00:15:50.219 fused_ordering(217) 00:15:50.219 fused_ordering(218) 00:15:50.219 fused_ordering(219) 00:15:50.219 fused_ordering(220) 00:15:50.219 fused_ordering(221) 00:15:50.219 fused_ordering(222) 00:15:50.219 fused_ordering(223) 00:15:50.219 fused_ordering(224) 00:15:50.219 fused_ordering(225) 00:15:50.219 fused_ordering(226) 00:15:50.219 fused_ordering(227) 00:15:50.219 fused_ordering(228) 00:15:50.219 fused_ordering(229) 00:15:50.219 fused_ordering(230) 00:15:50.219 fused_ordering(231) 00:15:50.219 fused_ordering(232) 00:15:50.219 fused_ordering(233) 00:15:50.219 fused_ordering(234) 00:15:50.219 fused_ordering(235) 00:15:50.219 fused_ordering(236) 00:15:50.219 fused_ordering(237) 00:15:50.219 fused_ordering(238) 00:15:50.219 fused_ordering(239) 00:15:50.219 fused_ordering(240) 00:15:50.219 fused_ordering(241) 00:15:50.219 fused_ordering(242) 00:15:50.219 fused_ordering(243) 00:15:50.219 fused_ordering(244) 00:15:50.219 fused_ordering(245) 00:15:50.219 fused_ordering(246) 00:15:50.219 fused_ordering(247) 00:15:50.219 fused_ordering(248) 00:15:50.219 fused_ordering(249) 00:15:50.219 fused_ordering(250) 00:15:50.219 fused_ordering(251) 00:15:50.219 fused_ordering(252) 00:15:50.219 fused_ordering(253) 00:15:50.219 fused_ordering(254) 00:15:50.219 fused_ordering(255) 00:15:50.219 fused_ordering(256) 00:15:50.219 fused_ordering(257) 00:15:50.219 fused_ordering(258) 00:15:50.219 fused_ordering(259) 00:15:50.219 fused_ordering(260) 00:15:50.219 fused_ordering(261) 00:15:50.219 fused_ordering(262) 00:15:50.219 fused_ordering(263) 00:15:50.219 fused_ordering(264) 00:15:50.219 fused_ordering(265) 00:15:50.219 fused_ordering(266) 00:15:50.219 fused_ordering(267) 00:15:50.219 fused_ordering(268) 00:15:50.219 fused_ordering(269) 00:15:50.219 fused_ordering(270) 00:15:50.219 fused_ordering(271) 
00:15:50.219 fused_ordering(272) 00:15:50.219 fused_ordering(273) 00:15:50.219 fused_ordering(274) 00:15:50.219 fused_ordering(275) 00:15:50.219 fused_ordering(276) 00:15:50.219 fused_ordering(277) 00:15:50.219 fused_ordering(278) 00:15:50.219 fused_ordering(279) 00:15:50.219 fused_ordering(280) 00:15:50.219 fused_ordering(281) 00:15:50.219 fused_ordering(282) 00:15:50.219 fused_ordering(283) 00:15:50.219 fused_ordering(284) 00:15:50.219 fused_ordering(285) 00:15:50.219 fused_ordering(286) 00:15:50.219 fused_ordering(287) 00:15:50.219 fused_ordering(288) 00:15:50.219 fused_ordering(289) 00:15:50.219 fused_ordering(290) 00:15:50.219 fused_ordering(291) 00:15:50.219 fused_ordering(292) 00:15:50.219 fused_ordering(293) 00:15:50.219 fused_ordering(294) 00:15:50.219 fused_ordering(295) 00:15:50.219 fused_ordering(296) 00:15:50.219 fused_ordering(297) 00:15:50.219 fused_ordering(298) 00:15:50.219 fused_ordering(299) 00:15:50.219 fused_ordering(300) 00:15:50.219 fused_ordering(301) 00:15:50.219 fused_ordering(302) 00:15:50.219 fused_ordering(303) 00:15:50.219 fused_ordering(304) 00:15:50.219 fused_ordering(305) 00:15:50.219 fused_ordering(306) 00:15:50.219 fused_ordering(307) 00:15:50.219 fused_ordering(308) 00:15:50.219 fused_ordering(309) 00:15:50.219 fused_ordering(310) 00:15:50.219 fused_ordering(311) 00:15:50.219 fused_ordering(312) 00:15:50.219 fused_ordering(313) 00:15:50.219 fused_ordering(314) 00:15:50.219 fused_ordering(315) 00:15:50.219 fused_ordering(316) 00:15:50.219 fused_ordering(317) 00:15:50.219 fused_ordering(318) 00:15:50.219 fused_ordering(319) 00:15:50.219 fused_ordering(320) 00:15:50.219 fused_ordering(321) 00:15:50.219 fused_ordering(322) 00:15:50.219 fused_ordering(323) 00:15:50.219 fused_ordering(324) 00:15:50.219 fused_ordering(325) 00:15:50.219 fused_ordering(326) 00:15:50.219 fused_ordering(327) 00:15:50.219 fused_ordering(328) 00:15:50.219 fused_ordering(329) 00:15:50.219 fused_ordering(330) 00:15:50.219 fused_ordering(331) 00:15:50.219 
fused_ordering(332) 00:15:50.219 fused_ordering(333) 00:15:50.219 fused_ordering(334) 00:15:50.219 fused_ordering(335) 00:15:50.219 fused_ordering(336) 00:15:50.219 fused_ordering(337) 00:15:50.219 fused_ordering(338) 00:15:50.219 fused_ordering(339) 00:15:50.219 fused_ordering(340) 00:15:50.219 fused_ordering(341) 00:15:50.219 fused_ordering(342) 00:15:50.219 fused_ordering(343) 00:15:50.219 fused_ordering(344) 00:15:50.219 fused_ordering(345) 00:15:50.219 fused_ordering(346) 00:15:50.219 fused_ordering(347) 00:15:50.219 fused_ordering(348) 00:15:50.219 fused_ordering(349) 00:15:50.219 fused_ordering(350) 00:15:50.219 fused_ordering(351) 00:15:50.219 fused_ordering(352) 00:15:50.219 fused_ordering(353) 00:15:50.219 fused_ordering(354) 00:15:50.219 fused_ordering(355) 00:15:50.219 fused_ordering(356) 00:15:50.219 fused_ordering(357) 00:15:50.219 fused_ordering(358) 00:15:50.219 fused_ordering(359) 00:15:50.219 fused_ordering(360) 00:15:50.219 fused_ordering(361) 00:15:50.219 fused_ordering(362) 00:15:50.219 fused_ordering(363) 00:15:50.219 fused_ordering(364) 00:15:50.219 fused_ordering(365) 00:15:50.219 fused_ordering(366) 00:15:50.219 fused_ordering(367) 00:15:50.219 fused_ordering(368) 00:15:50.219 fused_ordering(369) 00:15:50.219 fused_ordering(370) 00:15:50.219 fused_ordering(371) 00:15:50.219 fused_ordering(372) 00:15:50.219 fused_ordering(373) 00:15:50.219 fused_ordering(374) 00:15:50.219 fused_ordering(375) 00:15:50.219 fused_ordering(376) 00:15:50.219 fused_ordering(377) 00:15:50.219 fused_ordering(378) 00:15:50.219 fused_ordering(379) 00:15:50.219 fused_ordering(380) 00:15:50.219 fused_ordering(381) 00:15:50.219 fused_ordering(382) 00:15:50.219 fused_ordering(383) 00:15:50.219 fused_ordering(384) 00:15:50.219 fused_ordering(385) 00:15:50.219 fused_ordering(386) 00:15:50.219 fused_ordering(387) 00:15:50.219 fused_ordering(388) 00:15:50.219 fused_ordering(389) 00:15:50.219 fused_ordering(390) 00:15:50.219 fused_ordering(391) 00:15:50.219 fused_ordering(392) 
00:15:50.219 fused_ordering(393) 00:15:50.219 fused_ordering(394) 00:15:50.219 fused_ordering(395) 00:15:50.219 fused_ordering(396) 00:15:50.219 fused_ordering(397) 00:15:50.219 fused_ordering(398) 00:15:50.219 fused_ordering(399) 00:15:50.219 fused_ordering(400) 00:15:50.219 fused_ordering(401) 00:15:50.219 fused_ordering(402) 00:15:50.219 fused_ordering(403) 00:15:50.219 fused_ordering(404) 00:15:50.219 fused_ordering(405) 00:15:50.219 fused_ordering(406) 00:15:50.220 fused_ordering(407) 00:15:50.220 fused_ordering(408) 00:15:50.220 fused_ordering(409) 00:15:50.220 fused_ordering(410) 00:15:50.479 fused_ordering(411) 00:15:50.479 fused_ordering(412) 00:15:50.479 fused_ordering(413) 00:15:50.479 fused_ordering(414) 00:15:50.479 fused_ordering(415) 00:15:50.479 fused_ordering(416) 00:15:50.479 fused_ordering(417) 00:15:50.479 fused_ordering(418) 00:15:50.479 fused_ordering(419) 00:15:50.479 fused_ordering(420) 00:15:50.479 fused_ordering(421) 00:15:50.479 fused_ordering(422) 00:15:50.479 fused_ordering(423) 00:15:50.479 fused_ordering(424) 00:15:50.479 fused_ordering(425) 00:15:50.479 fused_ordering(426) 00:15:50.479 fused_ordering(427) 00:15:50.479 fused_ordering(428) 00:15:50.479 fused_ordering(429) 00:15:50.479 fused_ordering(430) 00:15:50.479 fused_ordering(431) 00:15:50.479 fused_ordering(432) 00:15:50.479 fused_ordering(433) 00:15:50.479 fused_ordering(434) 00:15:50.479 fused_ordering(435) 00:15:50.479 fused_ordering(436) 00:15:50.479 fused_ordering(437) 00:15:50.479 fused_ordering(438) 00:15:50.479 fused_ordering(439) 00:15:50.479 fused_ordering(440) 00:15:50.479 fused_ordering(441) 00:15:50.479 fused_ordering(442) 00:15:50.479 fused_ordering(443) 00:15:50.479 fused_ordering(444) 00:15:50.479 fused_ordering(445) 00:15:50.479 fused_ordering(446) 00:15:50.479 fused_ordering(447) 00:15:50.479 fused_ordering(448) 00:15:50.479 fused_ordering(449) 00:15:50.479 fused_ordering(450) 00:15:50.479 fused_ordering(451) 00:15:50.479 fused_ordering(452) 00:15:50.479 
fused_ordering(453) 00:15:50.479 [fused_ordering(454) through fused_ordering(1022) elided: identical per-iteration counter output, timestamps 00:15:50.479 to 00:15:51.307] 00:15:51.307 fused_ordering(1023) 00:15:51.565 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:51.565 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:51.565 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:51.565 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:51.565 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:51.565 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:51.565 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:51.565 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:51.565 rmmod nvme_tcp 00:15:51.565 rmmod nvme_fabrics 00:15:51.565 rmmod nvme_keyring 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:15:51.566 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:51.566 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:51.566 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3880126 ']' 00:15:51.566 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3880126 00:15:51.566 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3880126 ']' 00:15:51.566 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3880126 00:15:51.566 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:15:51.566 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.566 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3880126 00:15:51.566 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:51.566 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:51.566 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3880126' 00:15:51.566 killing process with pid 3880126 00:15:51.566 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3880126 00:15:51.566 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3880126 00:15:51.824 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:51.824 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:15:51.824 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:51.824 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:51.824 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:51.824 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:51.824 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:51.824 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:51.824 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:51.824 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.824 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.824 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.726 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:53.726 00:15:53.726 real 0m10.659s 00:15:53.726 user 0m4.944s 00:15:53.726 sys 0m5.807s 00:15:53.726 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.726 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:53.726 ************************************ 00:15:53.726 END TEST nvmf_fused_ordering 00:15:53.726 ************************************ 00:15:53.726 10:43:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:53.726 10:43:43 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:53.726 10:43:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.726 10:43:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:53.985 ************************************ 00:15:53.985 START TEST nvmf_ns_masking 00:15:53.985 ************************************ 00:15:53.985 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:53.985 * Looking for test storage... 00:15:53.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.985 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:53.985 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:15:53.985 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:53.985 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:53.985 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:53.985 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:53.985 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:53.985 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.985 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:53.985 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:53.985 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:53.985 10:43:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:53.985 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:53.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.986 --rc genhtml_branch_coverage=1 00:15:53.986 --rc genhtml_function_coverage=1 00:15:53.986 --rc genhtml_legend=1 00:15:53.986 --rc geninfo_all_blocks=1 00:15:53.986 --rc geninfo_unexecuted_blocks=1 00:15:53.986 00:15:53.986 ' 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:53.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.986 --rc genhtml_branch_coverage=1 00:15:53.986 --rc genhtml_function_coverage=1 00:15:53.986 --rc genhtml_legend=1 00:15:53.986 --rc geninfo_all_blocks=1 00:15:53.986 --rc geninfo_unexecuted_blocks=1 00:15:53.986 00:15:53.986 ' 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:53.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.986 --rc genhtml_branch_coverage=1 00:15:53.986 --rc genhtml_function_coverage=1 00:15:53.986 --rc genhtml_legend=1 00:15:53.986 --rc geninfo_all_blocks=1 00:15:53.986 --rc geninfo_unexecuted_blocks=1 00:15:53.986 00:15:53.986 ' 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:53.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.986 --rc genhtml_branch_coverage=1 00:15:53.986 --rc 
genhtml_function_coverage=1 00:15:53.986 --rc genhtml_legend=1 00:15:53.986 --rc geninfo_all_blocks=1 00:15:53.986 --rc geninfo_unexecuted_blocks=1 00:15:53.986 00:15:53.986 ' 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:53.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=36f80374-9b96-44a6-accb-46ae8138985f 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=6e25efaa-9979-435e-a4d3-dc409b5a08cc 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=2e04b6c9-052d-4ed6-8a5e-c370facacfbd 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:53.986 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:53.987 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.987 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:53.987 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:15:53.987 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:53.987 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.987 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.987 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.987 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:53.987 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:53.987 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:53.987 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:00.551 10:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:00.551 10:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:00.551 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:00.551 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:16:00.551 Found net devices under 0000:86:00.0: cvl_0_0 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:00.551 Found net devices under 0000:86:00.1: cvl_0_1 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:00.551 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:00.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:00.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:16:00.552 00:16:00.552 --- 10.0.0.2 ping statistics --- 00:16:00.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.552 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:00.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:00.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:16:00.552 00:16:00.552 --- 10.0.0.1 ping statistics --- 00:16:00.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.552 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3883936 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3883936 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3883936 ']' 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.552 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:00.552 [2024-11-19 10:43:49.797992] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:16:00.552 [2024-11-19 10:43:49.798035] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.552 [2024-11-19 10:43:49.876169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.552 [2024-11-19 10:43:49.917270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.552 [2024-11-19 10:43:49.917308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:00.552 [2024-11-19 10:43:49.917316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.552 [2024-11-19 10:43:49.917322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.552 [2024-11-19 10:43:49.917327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.552 [2024-11-19 10:43:49.917928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.552 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:00.552 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:16:00.552 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:00.552 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:00.552 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:00.552 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.552 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:00.552 [2024-11-19 10:43:50.233800] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:00.552 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:00.552 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:00.552 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:16:00.811 Malloc1 00:16:00.811 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:01.070 Malloc2 00:16:01.070 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:01.070 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:01.329 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.588 [2024-11-19 10:43:51.193537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.588 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:01.588 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2e04b6c9-052d-4ed6-8a5e-c370facacfbd -a 10.0.0.2 -s 4420 -i 4 00:16:01.588 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:01.588 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:01.588 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:01.588 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:01.588 10:43:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:04.119 [ 0]:0x1 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:04.119 
10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a2cd2d116b69494084799be21d3d5084 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a2cd2d116b69494084799be21d3d5084 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:04.119 [ 0]:0x1 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a2cd2d116b69494084799be21d3d5084 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a2cd2d116b69494084799be21d3d5084 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:04.119 [ 1]:0x2 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4727cfebd20445649d2e98dd78839b3a 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4727cfebd20445649d2e98dd78839b3a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:04.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.119 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:04.377 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:04.635 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:04.635 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2e04b6c9-052d-4ed6-8a5e-c370facacfbd -a 10.0.0.2 -s 4420 -i 4 00:16:04.635 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:04.635 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:04.635 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:04.635 10:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:16:04.635 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:16:04.635 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:16:07.165 [ 0]:0x2
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4727cfebd20445649d2e98dd78839b3a
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4727cfebd20445649d2e98dd78839b3a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:07.165 [ 0]:0x1
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a2cd2d116b69494084799be21d3d5084
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a2cd2d116b69494084799be21d3d5084 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:16:07.165 [ 1]:0x2
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4727cfebd20445649d2e98dd78839b3a
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4727cfebd20445649d2e98dd78839b3a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:07.165 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:16:07.423 [ 0]:0x2
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4727cfebd20445649d2e98dd78839b3a
00:16:07.423 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4727cfebd20445649d2e98dd78839b3a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:07.424 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:16:07.424 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:07.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:07.682 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:16:07.682 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:16:07.682 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2e04b6c9-052d-4ed6-8a5e-c370facacfbd -a 10.0.0.2 -s 4420 -i 4
00:16:07.940 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:16:07.940 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:16:07.940 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:07.940 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:16:07.940 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:16:07.940 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:16:09.842 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:09.842 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:09.842 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:09.842 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:16:09.842 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:09.842 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:16:09.842 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:16:09.842 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:16:10.100 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:16:10.100 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:16:10.100 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:16:10.100 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:10.100 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:10.100 [ 0]:0x1
00:16:10.100 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:10.100 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:10.100 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a2cd2d116b69494084799be21d3d5084
00:16:10.100 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a2cd2d116b69494084799be21d3d5084 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:10.100 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:16:10.100 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:10.100 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:16:10.358 [ 1]:0x2
00:16:10.358 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:10.358 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:10.358 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4727cfebd20445649d2e98dd78839b3a
00:16:10.358 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4727cfebd20445649d2e98dd78839b3a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:10.358 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:16:10.616 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:16:10.616 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:16:10.616 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:16:10.616 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:16:10.616 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:10.616 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:16:10.616 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:10.616 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:16:10.616 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:10.616 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:10.616 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:10.616 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:10.616 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:16:10.616 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:10.616 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:16:10.616 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:10.616 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:16:10.617 [ 0]:0x2
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4727cfebd20445649d2e98dd78839b3a
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4727cfebd20445649d2e98dd78839b3a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:16:10.617 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:16:10.875 [2024-11-19 10:44:00.452178] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:16:10.875 request:
00:16:10.875 {
00:16:10.875 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:16:10.875 "nsid": 2,
00:16:10.875 "host": "nqn.2016-06.io.spdk:host1",
00:16:10.875 "method": "nvmf_ns_remove_host",
00:16:10.875 "req_id": 1
00:16:10.875 }
00:16:10.875 Got JSON-RPC error response
00:16:10.875 response:
00:16:10.875 {
00:16:10.875 "code": -32602,
00:16:10.875 "message": "Invalid parameters"
00:16:10.875 }
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:16:10.876 [ 0]:0x2
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4727cfebd20445649d2e98dd78839b3a
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4727cfebd20445649d2e98dd78839b3a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:10.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3885909
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3885909 /var/tmp/host.sock
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3885909 ']'
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:10.876 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:16:11.134 [2024-11-19 10:44:00.692709] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:16:11.134 [2024-11-19 10:44:00.692753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3885909 ]
00:16:11.134 [2024-11-19 10:44:00.768906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:11.134 [2024-11-19 10:44:00.810615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:11.393 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:11.393 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:16:11.393 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:11.650 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:16:11.650 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 36f80374-9b96-44a6-accb-46ae8138985f
00:16:11.650 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:16:11.650 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 36F803749B9644A6ACCB46AE8138985F -i
00:16:11.911 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 6e25efaa-9979-435e-a4d3-dc409b5a08cc
00:16:11.911 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:16:11.911 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 6E25EFAA9979435EA4D3DC409B5A08CC -i
00:16:12.169 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:16:12.427 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
00:16:12.427 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:16:12.427 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:16:12.684 nvme0n1
00:16:12.684 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:16:12.684 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:16:13.250 nvme1n2
00:16:13.250 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name'
00:16:13.250 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs
00:16:13.250 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:16:13.250 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort
00:16:13.250 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs
00:16:13.507 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]]
00:16:13.507 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1
00:16:13.507 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid'
00:16:13.507 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1
00:16:13.765 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 36f80374-9b96-44a6-accb-46ae8138985f == \3\6\f\8\0\3\7\4\-\9\b\9\6\-\4\4\a\6\-\a\c\c\b\-\4\6\a\e\8\1\3\8\9\8\5\f ]]
00:16:13.765 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid'
00:16:13.765 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2
00:16:13.765 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2
00:16:13.765 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 6e25efaa-9979-435e-a4d3-dc409b5a08cc == \6\e\2\5\e\f\a\a\-\9\9\7\9\-\4\3\5\e\-\a\4\d\3\-\d\c\4\0\9\b\5\a\0\8\c\c ]]
00:16:13.765 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:14.023 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:16:14.282 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 36f80374-9b96-44a6-accb-46ae8138985f
00:16:14.282 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:16:14.282 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 36F803749B9644A6ACCB46AE8138985F
00:16:14.282 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:16:14.282 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 36F803749B9644A6ACCB46AE8138985F
00:16:14.282 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:14.282 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:14.282 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:14.282 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:14.282 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:14.282 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:14.282 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:16:14.282 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 36F803749B9644A6ACCB46AE8138985F
00:16:14.282 [2024-11-19 10:44:04.030010] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid
00:16:14.282 [2024-11-19 10:44:04.030041] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19
00:16:14.282 [2024-11-19 10:44:04.030050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.282 request:
00:16:14.282 {
00:16:14.282 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:16:14.282 "namespace": {
00:16:14.282 "bdev_name": "invalid",
00:16:14.282 "nsid": 1,
00:16:14.282 "nguid": "36F803749B9644A6ACCB46AE8138985F",
00:16:14.282 "no_auto_visible": false
00:16:14.282 },
00:16:14.282 "method": "nvmf_subsystem_add_ns",
00:16:14.282 "req_id": 1
00:16:14.282 }
00:16:14.282 Got JSON-RPC error response
00:16:14.282 response:
00:16:14.282 {
00:16:14.282 "code": -32602,
00:16:14.282 "message": "Invalid parameters"
00:16:14.282 }
00:16:14.282 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:16:14.282 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:14.282 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:14.282 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:14.282 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 36f80374-9b96-44a6-accb-46ae8138985f
00:16:14.282 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:16:14.282 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 36F803749B9644A6ACCB46AE8138985F -i
00:16:14.541 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s
00:16:17.067 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs
00:16:17.067 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length
00:16:17.067 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:16:17.067 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 ))
00:16:17.067 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3885909
00:16:17.067 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3885909 ']'
00:16:17.067 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3885909
00:16:17.067 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname
00:16:17.067 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:17.067 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3885909
00:16:17.067 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:16:17.067 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:16:17.067 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3885909'
killing process with pid 3885909
00:16:17.067 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3885909
00:16:17.067 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3885909
00:16:17.067 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:17.325 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:16:17.325 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini
00:16:17.325 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:17.325 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync
00:16:17.325 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:17.325 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e
00:16:17.325 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:17.325 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:17.326 rmmod nvme_tcp
00:16:17.326 rmmod
nvme_fabrics 00:16:17.326 rmmod nvme_keyring 00:16:17.326 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:17.326 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:16:17.326 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:16:17.326 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3883936 ']' 00:16:17.326 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3883936 00:16:17.326 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3883936 ']' 00:16:17.326 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3883936 00:16:17.326 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:17.326 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:17.326 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3883936 00:16:17.584 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:17.584 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:17.585 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3883936' 00:16:17.585 killing process with pid 3883936 00:16:17.585 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3883936 00:16:17.585 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3883936 00:16:17.585 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:17.585 
10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:17.585 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:17.585 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:16:17.585 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:16:17.585 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:17.585 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:16:17.585 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:17.585 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:17.585 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.585 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:17.585 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.154 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:20.154 00:16:20.154 real 0m25.893s 00:16:20.154 user 0m30.799s 00:16:20.154 sys 0m7.054s 00:16:20.154 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.154 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:20.154 ************************************ 00:16:20.154 END TEST nvmf_ns_masking 00:16:20.154 ************************************ 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:20.155 ************************************ 00:16:20.155 START TEST nvmf_nvme_cli 00:16:20.155 ************************************ 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:20.155 * Looking for test storage... 00:16:20.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.155 10:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:20.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.155 --rc genhtml_branch_coverage=1 00:16:20.155 --rc genhtml_function_coverage=1 00:16:20.155 --rc genhtml_legend=1 00:16:20.155 --rc geninfo_all_blocks=1 00:16:20.155 --rc geninfo_unexecuted_blocks=1 00:16:20.155 
00:16:20.155 ' 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:20.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.155 --rc genhtml_branch_coverage=1 00:16:20.155 --rc genhtml_function_coverage=1 00:16:20.155 --rc genhtml_legend=1 00:16:20.155 --rc geninfo_all_blocks=1 00:16:20.155 --rc geninfo_unexecuted_blocks=1 00:16:20.155 00:16:20.155 ' 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:20.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.155 --rc genhtml_branch_coverage=1 00:16:20.155 --rc genhtml_function_coverage=1 00:16:20.155 --rc genhtml_legend=1 00:16:20.155 --rc geninfo_all_blocks=1 00:16:20.155 --rc geninfo_unexecuted_blocks=1 00:16:20.155 00:16:20.155 ' 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:20.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.155 --rc genhtml_branch_coverage=1 00:16:20.155 --rc genhtml_function_coverage=1 00:16:20.155 --rc genhtml_legend=1 00:16:20.155 --rc geninfo_all_blocks=1 00:16:20.155 --rc geninfo_unexecuted_blocks=1 00:16:20.155 00:16:20.155 ' 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:20.155 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.156 10:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:20.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:16:20.156 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:16:26.725 10:44:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:26.725 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:26.726 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:26.726 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:26.726 10:44:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:26.726 Found net devices under 0000:86:00.0: cvl_0_0 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:26.726 Found net devices under 0000:86:00.1: cvl_0_1 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:26.726 10:44:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:26.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:26.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:16:26.726 00:16:26.726 --- 10.0.0.2 ping statistics --- 00:16:26.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.726 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:26.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:26.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:16:26.726 00:16:26.726 --- 10.0.0.1 ping statistics --- 00:16:26.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.726 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:26.726 10:44:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3890618 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3890618 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3890618 ']' 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:26.726 [2024-11-19 10:44:15.750148] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:16:26.726 [2024-11-19 10:44:15.750195] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.726 [2024-11-19 10:44:15.827989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.726 [2024-11-19 10:44:15.871627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.726 [2024-11-19 10:44:15.871663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.726 [2024-11-19 10:44:15.871670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.726 [2024-11-19 10:44:15.871676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.726 [2024-11-19 10:44:15.871681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:26.726 [2024-11-19 10:44:15.873291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.726 [2024-11-19 10:44:15.873409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.726 [2024-11-19 10:44:15.873518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.726 [2024-11-19 10:44:15.873519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:26.726 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:26.726 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.726 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:26.726 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.726 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:26.726 [2024-11-19 10:44:16.010443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.726 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.726 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:26.726 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:26.726 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:26.726 Malloc0 00:16:26.726 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:26.727 Malloc1 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:26.727 [2024-11-19 10:44:16.116119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:16:26.727 00:16:26.727 Discovery Log Number of Records 2, Generation counter 2 00:16:26.727 =====Discovery Log Entry 0====== 00:16:26.727 trtype: tcp 00:16:26.727 adrfam: ipv4 00:16:26.727 subtype: current discovery subsystem 00:16:26.727 treq: not required 00:16:26.727 portid: 0 00:16:26.727 trsvcid: 4420 
00:16:26.727 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:26.727 traddr: 10.0.0.2 00:16:26.727 eflags: explicit discovery connections, duplicate discovery information 00:16:26.727 sectype: none 00:16:26.727 =====Discovery Log Entry 1====== 00:16:26.727 trtype: tcp 00:16:26.727 adrfam: ipv4 00:16:26.727 subtype: nvme subsystem 00:16:26.727 treq: not required 00:16:26.727 portid: 0 00:16:26.727 trsvcid: 4420 00:16:26.727 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:26.727 traddr: 10.0.0.2 00:16:26.727 eflags: none 00:16:26.727 sectype: none 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:26.727 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:27.657 10:44:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:27.657 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:16:27.657 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:27.657 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:27.657 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:27.657 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:30.180 
10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:30.180 /dev/nvme0n2 ]] 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:16:30.180 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:30.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:30.181 rmmod nvme_tcp 00:16:30.181 rmmod nvme_fabrics 00:16:30.181 rmmod nvme_keyring 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3890618 ']' 
00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3890618 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3890618 ']' 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3890618 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3890618 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3890618' 00:16:30.181 killing process with pid 3890618 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3890618 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3890618 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:30.181 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:30.440 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:16:30.440 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:16:30.440 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:16:30.440 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:30.440 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:30.440 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.440 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.440 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.343 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:32.343 00:16:32.343 real 0m12.554s 00:16:32.343 user 0m17.939s 00:16:32.343 sys 0m5.227s 00:16:32.343 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.343 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:32.343 ************************************ 00:16:32.343 END TEST nvmf_nvme_cli 00:16:32.343 ************************************ 00:16:32.343 10:44:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:32.343 10:44:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:32.343 10:44:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:32.343 10:44:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:32.343 10:44:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:32.343 ************************************ 00:16:32.343 
START TEST nvmf_vfio_user 00:16:32.343 ************************************ 00:16:32.343 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:32.603 * Looking for test storage... 00:16:32.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:32.603 10:44:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:32.603 10:44:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:32.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.603 --rc genhtml_branch_coverage=1 00:16:32.603 --rc genhtml_function_coverage=1 00:16:32.603 --rc genhtml_legend=1 00:16:32.603 --rc geninfo_all_blocks=1 00:16:32.603 --rc geninfo_unexecuted_blocks=1 00:16:32.603 00:16:32.603 ' 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:32.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.603 --rc genhtml_branch_coverage=1 00:16:32.603 --rc genhtml_function_coverage=1 00:16:32.603 --rc genhtml_legend=1 00:16:32.603 --rc geninfo_all_blocks=1 00:16:32.603 --rc geninfo_unexecuted_blocks=1 00:16:32.603 00:16:32.603 ' 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:32.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.603 --rc genhtml_branch_coverage=1 00:16:32.603 --rc genhtml_function_coverage=1 00:16:32.603 --rc genhtml_legend=1 00:16:32.603 --rc geninfo_all_blocks=1 00:16:32.603 --rc geninfo_unexecuted_blocks=1 00:16:32.603 00:16:32.603 ' 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:32.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.603 --rc genhtml_branch_coverage=1 00:16:32.603 --rc genhtml_function_coverage=1 00:16:32.603 --rc genhtml_legend=1 00:16:32.603 --rc geninfo_all_blocks=1 00:16:32.603 --rc geninfo_unexecuted_blocks=1 00:16:32.603 00:16:32.603 ' 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.603 
10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.603 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:32.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:32.604 10:44:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3891709 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3891709' 00:16:32.604 Process pid: 3891709 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3891709 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 3891709 ']' 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.604 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:32.604 [2024-11-19 10:44:22.381168] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:16:32.604 [2024-11-19 10:44:22.381218] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.862 [2024-11-19 10:44:22.454058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:32.862 [2024-11-19 10:44:22.496343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.862 [2024-11-19 10:44:22.496379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:32.862 [2024-11-19 10:44:22.496386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:32.862 [2024-11-19 10:44:22.496392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:32.862 [2024-11-19 10:44:22.496397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:32.862 [2024-11-19 10:44:22.497997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.862 [2024-11-19 10:44:22.498112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.862 [2024-11-19 10:44:22.498236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.862 [2024-11-19 10:44:22.498237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:32.862 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.862 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:32.862 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:34.232 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:34.232 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:34.232 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:34.232 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:34.232 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:34.232 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:34.489 Malloc1 00:16:34.489 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:34.489 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:34.746 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:35.002 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:35.002 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:35.002 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:35.259 Malloc2 00:16:35.259 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:35.259 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:35.517 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:35.776 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:35.776 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:35.776 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:16:35.776 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:35.776 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:35.776 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:35.776 [2024-11-19 10:44:25.474106] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:16:35.776 [2024-11-19 10:44:25.474144] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892355 ] 00:16:35.776 [2024-11-19 10:44:25.511157] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:35.776 [2024-11-19 10:44:25.517566] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:35.776 [2024-11-19 10:44:25.517588] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff45c6d6000 00:16:35.776 [2024-11-19 10:44:25.518565] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:35.776 [2024-11-19 10:44:25.519569] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:35.776 [2024-11-19 10:44:25.520572] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:35.776 [2024-11-19 10:44:25.521575] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:35.776 [2024-11-19 10:44:25.522583] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:35.776 [2024-11-19 10:44:25.523589] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:35.776 [2024-11-19 10:44:25.524597] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:35.776 [2024-11-19 10:44:25.525615] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:35.776 [2024-11-19 10:44:25.526616] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:35.776 [2024-11-19 10:44:25.526625] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff45c6cb000 00:16:35.776 [2024-11-19 10:44:25.527540] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:35.776 [2024-11-19 10:44:25.540468] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:35.776 [2024-11-19 10:44:25.540493] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:16:35.776 [2024-11-19 10:44:25.545739] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:16:35.776 [2024-11-19 10:44:25.545773] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:35.776 [2024-11-19 10:44:25.545837] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:16:35.776 [2024-11-19 10:44:25.545850] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:16:35.777 [2024-11-19 10:44:25.545855] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:16:35.777 [2024-11-19 10:44:25.546735] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:35.777 [2024-11-19 10:44:25.546743] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:16:35.777 [2024-11-19 10:44:25.546750] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:16:35.777 [2024-11-19 10:44:25.547739] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:35.777 [2024-11-19 10:44:25.547747] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:16:35.777 [2024-11-19 10:44:25.547753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:35.777 [2024-11-19 10:44:25.548744] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:35.777 [2024-11-19 10:44:25.548752] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:35.777 [2024-11-19 10:44:25.549752] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:35.777 [2024-11-19 10:44:25.549759] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:35.777 [2024-11-19 10:44:25.549763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:35.777 [2024-11-19 10:44:25.549769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:35.777 [2024-11-19 10:44:25.549878] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:16:35.777 [2024-11-19 10:44:25.549882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:35.777 [2024-11-19 10:44:25.549886] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:35.777 [2024-11-19 10:44:25.550764] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:35.777 [2024-11-19 10:44:25.551767] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:35.777 [2024-11-19 10:44:25.552774] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:16:35.777 [2024-11-19 10:44:25.553770] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:35.777 [2024-11-19 10:44:25.553830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:35.777 [2024-11-19 10:44:25.554784] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:35.777 [2024-11-19 10:44:25.554791] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:35.777 [2024-11-19 10:44:25.554796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:35.777 [2024-11-19 10:44:25.554812] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:16:35.777 [2024-11-19 10:44:25.554818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:35.777 [2024-11-19 10:44:25.554832] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:35.777 [2024-11-19 10:44:25.554837] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.777 [2024-11-19 10:44:25.554840] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.777 [2024-11-19 10:44:25.554852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.777 [2024-11-19 10:44:25.554895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:35.777 [2024-11-19 10:44:25.554903] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:16:35.777 [2024-11-19 10:44:25.554907] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:16:35.777 [2024-11-19 10:44:25.554911] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:16:35.777 [2024-11-19 10:44:25.554915] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:35.777 [2024-11-19 10:44:25.554921] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:16:35.777 [2024-11-19 10:44:25.554925] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:16:35.777 [2024-11-19 10:44:25.554929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:16:35.777 [2024-11-19 10:44:25.554937] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:35.777 [2024-11-19 10:44:25.554947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:35.777 [2024-11-19 10:44:25.554959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:35.777 [2024-11-19 10:44:25.554969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.777 [2024-11-19 
10:44:25.554976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.777 [2024-11-19 10:44:25.554983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.777 [2024-11-19 10:44:25.554991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.777 [2024-11-19 10:44:25.554995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:35.777 [2024-11-19 10:44:25.555001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:35.777 [2024-11-19 10:44:25.555008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:35.777 [2024-11-19 10:44:25.555019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:35.777 [2024-11-19 10:44:25.555026] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:16:35.777 [2024-11-19 10:44:25.555030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:35.777 [2024-11-19 10:44:25.555036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:16:35.777 [2024-11-19 10:44:25.555041] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:16:35.777 [2024-11-19 10:44:25.555048] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:35.777 [2024-11-19 10:44:25.555056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:35.777 [2024-11-19 10:44:25.555105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:16:35.777 [2024-11-19 10:44:25.555112] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:35.777 [2024-11-19 10:44:25.555118] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:35.777 [2024-11-19 10:44:25.555122] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:35.777 [2024-11-19 10:44:25.555124] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.777 [2024-11-19 10:44:25.555130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:35.777 [2024-11-19 10:44:25.555140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:35.777 [2024-11-19 10:44:25.555147] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:16:35.777 [2024-11-19 10:44:25.555158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:16:35.777 [2024-11-19 10:44:25.555166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:35.777 [2024-11-19 10:44:25.555172] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:35.777 [2024-11-19 10:44:25.555175] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.777 [2024-11-19 10:44:25.555178] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.777 [2024-11-19 10:44:25.555183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.777 [2024-11-19 10:44:25.555205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:35.777 [2024-11-19 10:44:25.555216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:35.777 [2024-11-19 10:44:25.555223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:35.777 [2024-11-19 10:44:25.555229] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:35.777 [2024-11-19 10:44:25.555233] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.777 [2024-11-19 10:44:25.555235] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.777 [2024-11-19 10:44:25.555241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.777 [2024-11-19 10:44:25.555255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:35.777 [2024-11-19 10:44:25.555262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:35.778 [2024-11-19 10:44:25.555268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:35.778 [2024-11-19 10:44:25.555274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:16:35.778 [2024-11-19 10:44:25.555279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:35.778 [2024-11-19 10:44:25.555284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:35.778 [2024-11-19 10:44:25.555288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:16:35.778 [2024-11-19 10:44:25.555292] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:35.778 [2024-11-19 10:44:25.555297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:16:35.778 [2024-11-19 10:44:25.555301] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:16:35.778 [2024-11-19 10:44:25.555316] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:35.778 [2024-11-19 10:44:25.555327] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:35.778 [2024-11-19 10:44:25.555337] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:35.778 [2024-11-19 10:44:25.555344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:35.778 [2024-11-19 10:44:25.555353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:35.778 [2024-11-19 10:44:25.555365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:35.778 [2024-11-19 10:44:25.555375] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:35.778 [2024-11-19 10:44:25.555385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:35.778 [2024-11-19 10:44:25.555396] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:35.778 [2024-11-19 10:44:25.555400] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:35.778 [2024-11-19 10:44:25.555403] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:35.778 [2024-11-19 10:44:25.555406] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:35.778 [2024-11-19 10:44:25.555409] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:35.778 [2024-11-19 10:44:25.555415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:16:35.778 [2024-11-19 10:44:25.555421] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:35.778 [2024-11-19 10:44:25.555424] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:35.778 [2024-11-19 10:44:25.555427] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.778 [2024-11-19 10:44:25.555433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:35.778 [2024-11-19 10:44:25.555438] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:35.778 [2024-11-19 10:44:25.555442] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.778 [2024-11-19 10:44:25.555445] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.778 [2024-11-19 10:44:25.555450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.778 [2024-11-19 10:44:25.555457] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:35.778 [2024-11-19 10:44:25.555461] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:35.778 [2024-11-19 10:44:25.555463] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.778 [2024-11-19 10:44:25.555469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:35.778 [2024-11-19 10:44:25.555474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:16:35.778 [2024-11-19 10:44:25.555485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:35.778 [2024-11-19 10:44:25.555494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:35.778 [2024-11-19 10:44:25.555500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:35.778 ===================================================== 00:16:35.778 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:35.778 ===================================================== 00:16:35.778 Controller Capabilities/Features 00:16:35.778 ================================ 00:16:35.778 Vendor ID: 4e58 00:16:35.778 Subsystem Vendor ID: 4e58 00:16:35.778 Serial Number: SPDK1 00:16:35.778 Model Number: SPDK bdev Controller 00:16:35.778 Firmware Version: 25.01 00:16:35.778 Recommended Arb Burst: 6 00:16:35.778 IEEE OUI Identifier: 8d 6b 50 00:16:35.778 Multi-path I/O 00:16:35.778 May have multiple subsystem ports: Yes 00:16:35.778 May have multiple controllers: Yes 00:16:35.778 Associated with SR-IOV VF: No 00:16:35.778 Max Data Transfer Size: 131072 00:16:35.778 Max Number of Namespaces: 32 00:16:35.778 Max Number of I/O Queues: 127 00:16:35.778 NVMe Specification Version (VS): 1.3 00:16:35.778 NVMe Specification Version (Identify): 1.3 00:16:35.778 Maximum Queue Entries: 256 00:16:35.778 Contiguous Queues Required: Yes 00:16:35.778 Arbitration Mechanisms Supported 00:16:35.778 Weighted Round Robin: Not Supported 00:16:35.778 Vendor Specific: Not Supported 00:16:35.778 Reset Timeout: 15000 ms 00:16:35.778 Doorbell Stride: 4 bytes 00:16:35.778 NVM Subsystem Reset: Not Supported 00:16:35.778 Command Sets Supported 00:16:35.778 NVM Command Set: Supported 00:16:35.778 Boot Partition: Not Supported 00:16:35.778 Memory 
Page Size Minimum: 4096 bytes 00:16:35.778 Memory Page Size Maximum: 4096 bytes 00:16:35.778 Persistent Memory Region: Not Supported 00:16:35.778 Optional Asynchronous Events Supported 00:16:35.778 Namespace Attribute Notices: Supported 00:16:35.778 Firmware Activation Notices: Not Supported 00:16:35.778 ANA Change Notices: Not Supported 00:16:35.778 PLE Aggregate Log Change Notices: Not Supported 00:16:35.778 LBA Status Info Alert Notices: Not Supported 00:16:35.778 EGE Aggregate Log Change Notices: Not Supported 00:16:35.778 Normal NVM Subsystem Shutdown event: Not Supported 00:16:35.778 Zone Descriptor Change Notices: Not Supported 00:16:35.778 Discovery Log Change Notices: Not Supported 00:16:35.778 Controller Attributes 00:16:35.778 128-bit Host Identifier: Supported 00:16:35.778 Non-Operational Permissive Mode: Not Supported 00:16:35.778 NVM Sets: Not Supported 00:16:35.778 Read Recovery Levels: Not Supported 00:16:35.778 Endurance Groups: Not Supported 00:16:35.778 Predictable Latency Mode: Not Supported 00:16:35.778 Traffic Based Keep ALive: Not Supported 00:16:35.778 Namespace Granularity: Not Supported 00:16:35.778 SQ Associations: Not Supported 00:16:35.778 UUID List: Not Supported 00:16:35.778 Multi-Domain Subsystem: Not Supported 00:16:35.778 Fixed Capacity Management: Not Supported 00:16:35.778 Variable Capacity Management: Not Supported 00:16:35.778 Delete Endurance Group: Not Supported 00:16:35.778 Delete NVM Set: Not Supported 00:16:35.778 Extended LBA Formats Supported: Not Supported 00:16:35.778 Flexible Data Placement Supported: Not Supported 00:16:35.778 00:16:35.778 Controller Memory Buffer Support 00:16:35.778 ================================ 00:16:35.778 Supported: No 00:16:35.778 00:16:35.778 Persistent Memory Region Support 00:16:35.778 ================================ 00:16:35.778 Supported: No 00:16:35.778 00:16:35.778 Admin Command Set Attributes 00:16:35.778 ============================ 00:16:35.778 Security Send/Receive: Not Supported 
00:16:35.778 Format NVM: Not Supported 00:16:35.778 Firmware Activate/Download: Not Supported 00:16:35.778 Namespace Management: Not Supported 00:16:35.778 Device Self-Test: Not Supported 00:16:35.778 Directives: Not Supported 00:16:35.778 NVMe-MI: Not Supported 00:16:35.778 Virtualization Management: Not Supported 00:16:35.778 Doorbell Buffer Config: Not Supported 00:16:35.778 Get LBA Status Capability: Not Supported 00:16:35.778 Command & Feature Lockdown Capability: Not Supported 00:16:35.778 Abort Command Limit: 4 00:16:35.778 Async Event Request Limit: 4 00:16:35.778 Number of Firmware Slots: N/A 00:16:35.778 Firmware Slot 1 Read-Only: N/A 00:16:35.778 Firmware Activation Without Reset: N/A 00:16:35.778 Multiple Update Detection Support: N/A 00:16:35.778 Firmware Update Granularity: No Information Provided 00:16:35.778 Per-Namespace SMART Log: No 00:16:35.778 Asymmetric Namespace Access Log Page: Not Supported 00:16:35.778 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:35.778 Command Effects Log Page: Supported 00:16:35.778 Get Log Page Extended Data: Supported 00:16:35.778 Telemetry Log Pages: Not Supported 00:16:35.779 Persistent Event Log Pages: Not Supported 00:16:35.779 Supported Log Pages Log Page: May Support 00:16:35.779 Commands Supported & Effects Log Page: Not Supported 00:16:35.779 Feature Identifiers & Effects Log Page:May Support 00:16:35.779 NVMe-MI Commands & Effects Log Page: May Support 00:16:35.779 Data Area 4 for Telemetry Log: Not Supported 00:16:35.779 Error Log Page Entries Supported: 128 00:16:35.779 Keep Alive: Supported 00:16:35.779 Keep Alive Granularity: 10000 ms 00:16:35.779 00:16:35.779 NVM Command Set Attributes 00:16:35.779 ========================== 00:16:35.779 Submission Queue Entry Size 00:16:35.779 Max: 64 00:16:35.779 Min: 64 00:16:35.779 Completion Queue Entry Size 00:16:35.779 Max: 16 00:16:35.779 Min: 16 00:16:35.779 Number of Namespaces: 32 00:16:35.779 Compare Command: Supported 00:16:35.779 Write Uncorrectable 
Command: Not Supported 00:16:35.779 Dataset Management Command: Supported 00:16:35.779 Write Zeroes Command: Supported 00:16:35.779 Set Features Save Field: Not Supported 00:16:35.779 Reservations: Not Supported 00:16:35.779 Timestamp: Not Supported 00:16:35.779 Copy: Supported 00:16:35.779 Volatile Write Cache: Present 00:16:35.779 Atomic Write Unit (Normal): 1 00:16:35.779 Atomic Write Unit (PFail): 1 00:16:35.779 Atomic Compare & Write Unit: 1 00:16:35.779 Fused Compare & Write: Supported 00:16:35.779 Scatter-Gather List 00:16:35.779 SGL Command Set: Supported (Dword aligned) 00:16:35.779 SGL Keyed: Not Supported 00:16:35.779 SGL Bit Bucket Descriptor: Not Supported 00:16:35.779 SGL Metadata Pointer: Not Supported 00:16:35.779 Oversized SGL: Not Supported 00:16:35.779 SGL Metadata Address: Not Supported 00:16:35.779 SGL Offset: Not Supported 00:16:35.779 Transport SGL Data Block: Not Supported 00:16:35.779 Replay Protected Memory Block: Not Supported 00:16:35.779 00:16:35.779 Firmware Slot Information 00:16:35.779 ========================= 00:16:35.779 Active slot: 1 00:16:35.779 Slot 1 Firmware Revision: 25.01 00:16:35.779 00:16:35.779 00:16:35.779 Commands Supported and Effects 00:16:35.779 ============================== 00:16:35.779 Admin Commands 00:16:35.779 -------------- 00:16:35.779 Get Log Page (02h): Supported 00:16:35.779 Identify (06h): Supported 00:16:35.779 Abort (08h): Supported 00:16:35.779 Set Features (09h): Supported 00:16:35.779 Get Features (0Ah): Supported 00:16:35.779 Asynchronous Event Request (0Ch): Supported 00:16:35.779 Keep Alive (18h): Supported 00:16:35.779 I/O Commands 00:16:35.779 ------------ 00:16:35.779 Flush (00h): Supported LBA-Change 00:16:35.779 Write (01h): Supported LBA-Change 00:16:35.779 Read (02h): Supported 00:16:35.779 Compare (05h): Supported 00:16:35.779 Write Zeroes (08h): Supported LBA-Change 00:16:35.779 Dataset Management (09h): Supported LBA-Change 00:16:35.779 Copy (19h): Supported LBA-Change 00:16:35.779 
00:16:35.779 Error Log 00:16:35.779 ========= 00:16:35.779 00:16:35.779 Arbitration 00:16:35.779 =========== 00:16:35.779 Arbitration Burst: 1 00:16:35.779 00:16:35.779 Power Management 00:16:35.779 ================ 00:16:35.779 Number of Power States: 1 00:16:35.779 Current Power State: Power State #0 00:16:35.779 Power State #0: 00:16:35.779 Max Power: 0.00 W 00:16:35.779 Non-Operational State: Operational 00:16:35.779 Entry Latency: Not Reported 00:16:35.779 Exit Latency: Not Reported 00:16:35.779 Relative Read Throughput: 0 00:16:35.779 Relative Read Latency: 0 00:16:35.779 Relative Write Throughput: 0 00:16:35.779 Relative Write Latency: 0 00:16:35.779 Idle Power: Not Reported 00:16:35.779 Active Power: Not Reported 00:16:35.779 Non-Operational Permissive Mode: Not Supported 00:16:35.779 00:16:35.779 Health Information 00:16:35.779 ================== 00:16:35.779 Critical Warnings: 00:16:35.779 Available Spare Space: OK 00:16:35.779 Temperature: OK 00:16:35.779 Device Reliability: OK 00:16:35.779 Read Only: No 00:16:35.779 Volatile Memory Backup: OK 00:16:35.779 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:35.779 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:35.779 Available Spare: 0% 00:16:35.779 Available Sp[2024-11-19 10:44:25.555584] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:35.779 [2024-11-19 10:44:25.555593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:35.779 [2024-11-19 10:44:25.555618] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:16:35.779 [2024-11-19 10:44:25.555626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.779 [2024-11-19 10:44:25.555632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.779 [2024-11-19 10:44:25.555637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.779 [2024-11-19 10:44:25.555642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.779 [2024-11-19 10:44:25.555796] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:35.779 [2024-11-19 10:44:25.555805] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:35.779 [2024-11-19 10:44:25.556805] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:35.779 [2024-11-19 10:44:25.556853] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:16:35.779 [2024-11-19 10:44:25.556860] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:16:35.779 [2024-11-19 10:44:25.557808] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:35.779 [2024-11-19 10:44:25.557817] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:16:35.779 [2024-11-19 10:44:25.557862] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:35.779 [2024-11-19 10:44:25.558832] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:36.037 are Threshold: 0% 00:16:36.037 Life Percentage Used: 0% 
00:16:36.037 Data Units Read: 0 00:16:36.037 Data Units Written: 0 00:16:36.037 Host Read Commands: 0 00:16:36.037 Host Write Commands: 0 00:16:36.037 Controller Busy Time: 0 minutes 00:16:36.037 Power Cycles: 0 00:16:36.037 Power On Hours: 0 hours 00:16:36.037 Unsafe Shutdowns: 0 00:16:36.037 Unrecoverable Media Errors: 0 00:16:36.037 Lifetime Error Log Entries: 0 00:16:36.037 Warning Temperature Time: 0 minutes 00:16:36.037 Critical Temperature Time: 0 minutes 00:16:36.037 00:16:36.037 Number of Queues 00:16:36.037 ================ 00:16:36.037 Number of I/O Submission Queues: 127 00:16:36.037 Number of I/O Completion Queues: 127 00:16:36.037 00:16:36.037 Active Namespaces 00:16:36.037 ================= 00:16:36.037 Namespace ID:1 00:16:36.037 Error Recovery Timeout: Unlimited 00:16:36.037 Command Set Identifier: NVM (00h) 00:16:36.037 Deallocate: Supported 00:16:36.037 Deallocated/Unwritten Error: Not Supported 00:16:36.037 Deallocated Read Value: Unknown 00:16:36.037 Deallocate in Write Zeroes: Not Supported 00:16:36.037 Deallocated Guard Field: 0xFFFF 00:16:36.037 Flush: Supported 00:16:36.037 Reservation: Supported 00:16:36.037 Namespace Sharing Capabilities: Multiple Controllers 00:16:36.037 Size (in LBAs): 131072 (0GiB) 00:16:36.037 Capacity (in LBAs): 131072 (0GiB) 00:16:36.037 Utilization (in LBAs): 131072 (0GiB) 00:16:36.037 NGUID: FA562994FF2842FA83BB8EF77DEF5DC5 00:16:36.037 UUID: fa562994-ff28-42fa-83bb-8ef77def5dc5 00:16:36.037 Thin Provisioning: Not Supported 00:16:36.037 Per-NS Atomic Units: Yes 00:16:36.037 Atomic Boundary Size (Normal): 0 00:16:36.037 Atomic Boundary Size (PFail): 0 00:16:36.037 Atomic Boundary Offset: 0 00:16:36.037 Maximum Single Source Range Length: 65535 00:16:36.037 Maximum Copy Length: 65535 00:16:36.037 Maximum Source Range Count: 1 00:16:36.037 NGUID/EUI64 Never Reused: No 00:16:36.037 Namespace Write Protected: No 00:16:36.037 Number of LBA Formats: 1 00:16:36.037 Current LBA Format: LBA Format #00 00:16:36.037 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:16:36.037 00:16:36.037 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:36.037 [2024-11-19 10:44:25.785036] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:41.291 Initializing NVMe Controllers 00:16:41.291 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:41.291 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:41.291 Initialization complete. Launching workers. 00:16:41.291 ======================================================== 00:16:41.291 Latency(us) 00:16:41.291 Device Information : IOPS MiB/s Average min max 00:16:41.291 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39946.30 156.04 3204.12 933.84 8641.64 00:16:41.291 ======================================================== 00:16:41.291 Total : 39946.30 156.04 3204.12 933.84 8641.64 00:16:41.291 00:16:41.291 [2024-11-19 10:44:30.802023] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:41.291 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:41.291 [2024-11-19 10:44:31.036116] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:46.541 Initializing NVMe Controllers 00:16:46.541 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:46.541 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:46.541 Initialization complete. Launching workers. 00:16:46.541 ======================================================== 00:16:46.541 Latency(us) 00:16:46.541 Device Information : IOPS MiB/s Average min max 00:16:46.541 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16044.42 62.67 7988.94 5981.31 15441.12 00:16:46.541 ======================================================== 00:16:46.541 Total : 16044.42 62.67 7988.94 5981.31 15441.12 00:16:46.541 00:16:46.541 [2024-11-19 10:44:36.073194] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:46.541 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:46.541 [2024-11-19 10:44:36.284178] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:51.795 [2024-11-19 10:44:41.396686] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:51.795 Initializing NVMe Controllers 00:16:51.795 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:51.795 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:51.795 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:51.795 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:51.795 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:51.795 Initialization complete. 
Launching workers. 00:16:51.795 Starting thread on core 2 00:16:51.795 Starting thread on core 3 00:16:51.795 Starting thread on core 1 00:16:51.795 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:52.052 [2024-11-19 10:44:41.694575] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:55.325 [2024-11-19 10:44:44.760249] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:55.325 Initializing NVMe Controllers 00:16:55.325 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:55.325 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:55.325 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:55.325 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:55.325 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:55.325 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:55.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:55.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:55.325 Initialization complete. Launching workers. 
00:16:55.325 Starting thread on core 1 with urgent priority queue 00:16:55.325 Starting thread on core 2 with urgent priority queue 00:16:55.325 Starting thread on core 3 with urgent priority queue 00:16:55.325 Starting thread on core 0 with urgent priority queue 00:16:55.325 SPDK bdev Controller (SPDK1 ) core 0: 7693.33 IO/s 13.00 secs/100000 ios 00:16:55.325 SPDK bdev Controller (SPDK1 ) core 1: 6807.33 IO/s 14.69 secs/100000 ios 00:16:55.325 SPDK bdev Controller (SPDK1 ) core 2: 8300.67 IO/s 12.05 secs/100000 ios 00:16:55.325 SPDK bdev Controller (SPDK1 ) core 3: 6338.33 IO/s 15.78 secs/100000 ios 00:16:55.325 ======================================================== 00:16:55.325 00:16:55.325 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:55.325 [2024-11-19 10:44:45.051869] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:55.325 Initializing NVMe Controllers 00:16:55.326 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:55.326 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:55.326 Namespace ID: 1 size: 0GB 00:16:55.326 Initialization complete. 00:16:55.326 INFO: using host memory buffer for IO 00:16:55.326 Hello world! 
00:16:55.326 [2024-11-19 10:44:45.086082] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:55.582 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:55.582 [2024-11-19 10:44:45.370630] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:56.950 Initializing NVMe Controllers 00:16:56.950 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:56.950 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:56.950 Initialization complete. Launching workers. 00:16:56.950 submit (in ns) avg, min, max = 7756.8, 3192.4, 3998879.0 00:16:56.950 complete (in ns) avg, min, max = 21603.0, 1760.0, 3997832.4 00:16:56.950 00:16:56.950 Submit histogram 00:16:56.950 ================ 00:16:56.950 Range in us Cumulative Count 00:16:56.950 3.185 - 3.200: 0.0060% ( 1) 00:16:56.950 3.200 - 3.215: 0.0423% ( 6) 00:16:56.950 3.215 - 3.230: 0.1208% ( 13) 00:16:56.950 3.230 - 3.246: 0.4649% ( 57) 00:16:56.950 3.246 - 3.261: 1.1532% ( 114) 00:16:56.950 3.261 - 3.276: 3.8582% ( 448) 00:16:56.950 3.276 - 3.291: 9.3588% ( 911) 00:16:56.950 3.291 - 3.307: 15.3907% ( 999) 00:16:56.950 3.307 - 3.322: 21.7727% ( 1057) 00:16:56.950 3.322 - 3.337: 28.3601% ( 1091) 00:16:56.950 3.337 - 3.352: 34.2531% ( 976) 00:16:56.950 3.352 - 3.368: 40.2005% ( 985) 00:16:56.950 3.368 - 3.383: 46.0270% ( 965) 00:16:56.950 3.383 - 3.398: 51.9865% ( 987) 00:16:56.950 3.398 - 3.413: 57.3542% ( 889) 00:16:56.950 3.413 - 3.429: 65.0344% ( 1272) 00:16:56.950 3.429 - 3.444: 71.9901% ( 1152) 00:16:56.950 3.444 - 3.459: 77.2854% ( 877) 00:16:56.950 3.459 - 3.474: 81.9829% ( 778) 00:16:56.950 3.474 - 3.490: 85.0682% ( 511) 00:16:56.950 3.490 - 3.505: 87.1815% ( 350) 
00:16:56.950 3.505 - 3.520: 87.9906% ( 134) 00:16:56.950 3.520 - 3.535: 88.4555% ( 77) 00:16:56.950 3.535 - 3.550: 88.8359% ( 63) 00:16:56.950 3.550 - 3.566: 89.2585% ( 70) 00:16:56.950 3.566 - 3.581: 89.9288% ( 111) 00:16:56.950 3.581 - 3.596: 90.6412% ( 118) 00:16:56.950 3.596 - 3.611: 91.5228% ( 146) 00:16:56.950 3.611 - 3.627: 92.3318% ( 134) 00:16:56.950 3.627 - 3.642: 93.2556% ( 153) 00:16:56.950 3.642 - 3.657: 94.1613% ( 150) 00:16:56.950 3.657 - 3.672: 94.9644% ( 133) 00:16:56.950 3.672 - 3.688: 95.9606% ( 165) 00:16:56.950 3.688 - 3.703: 96.7335% ( 128) 00:16:56.950 3.703 - 3.718: 97.3735% ( 106) 00:16:56.950 3.718 - 3.733: 97.9350% ( 93) 00:16:56.950 3.733 - 3.749: 98.3396% ( 67) 00:16:56.950 3.749 - 3.764: 98.6717% ( 55) 00:16:56.950 3.764 - 3.779: 99.0158% ( 57) 00:16:56.950 3.779 - 3.794: 99.2513% ( 39) 00:16:56.950 3.794 - 3.810: 99.3841% ( 22) 00:16:56.950 3.810 - 3.825: 99.4928% ( 18) 00:16:56.950 3.825 - 3.840: 99.5894% ( 16) 00:16:56.950 3.840 - 3.855: 99.6196% ( 5) 00:16:56.950 3.855 - 3.870: 99.6317% ( 2) 00:16:56.950 4.907 - 4.937: 99.6377% ( 1) 00:16:56.950 4.998 - 5.029: 99.6438% ( 1) 00:16:56.950 5.090 - 5.120: 99.6498% ( 1) 00:16:56.950 5.211 - 5.242: 99.6558% ( 1) 00:16:56.950 5.272 - 5.303: 99.6619% ( 1) 00:16:56.950 5.364 - 5.394: 99.6679% ( 1) 00:16:56.950 5.455 - 5.486: 99.6740% ( 1) 00:16:56.950 5.547 - 5.577: 99.6860% ( 2) 00:16:56.950 5.577 - 5.608: 99.6981% ( 2) 00:16:56.950 5.608 - 5.638: 99.7041% ( 1) 00:16:56.950 5.638 - 5.669: 99.7162% ( 2) 00:16:56.950 5.669 - 5.699: 99.7223% ( 1) 00:16:56.950 5.699 - 5.730: 99.7343% ( 2) 00:16:56.950 5.730 - 5.760: 99.7404% ( 1) 00:16:56.950 5.790 - 5.821: 99.7524% ( 2) 00:16:56.950 5.821 - 5.851: 99.7585% ( 1) 00:16:56.950 5.851 - 5.882: 99.7645% ( 1) 00:16:56.950 5.912 - 5.943: 99.7766% ( 2) 00:16:56.950 5.943 - 5.973: 99.7826% ( 1) 00:16:56.950 5.973 - 6.004: 99.7947% ( 2) 00:16:56.950 6.004 - 6.034: 99.8007% ( 1) 00:16:56.950 6.095 - 6.126: 99.8189% ( 3) 00:16:56.950 6.187 - 6.217: 
99.8249% ( 1) 00:16:56.950 6.430 - 6.461: 99.8309% ( 1) 00:16:56.950 6.491 - 6.522: 99.8370% ( 1) 00:16:56.950 6.522 - 6.552: 99.8430% ( 1) 00:16:56.950 6.552 - 6.583: 99.8491% ( 1) 00:16:56.950 6.705 - 6.735: 99.8551% ( 1) 00:16:56.950 7.040 - 7.070: 99.8611% ( 1) 00:16:56.950 7.253 - 7.284: 99.8672% ( 1) 00:16:56.950 7.314 - 7.345: 99.8792% ( 2) 00:16:56.950 7.528 - 7.558: 99.8853% ( 1) 00:16:56.950 7.924 - 7.985: 99.8913% ( 1) 00:16:56.950 3994.575 - 4025.783: 100.0000% ( 18) 00:16:56.950 00:16:56.950 [2024-11-19 10:44:46.390451] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:56.950 Complete histogram 00:16:56.950 ================== 00:16:56.950 Range in us Cumulative Count 00:16:56.950 1.760 - 1.768: 0.0483% ( 8) 00:16:56.950 1.768 - 1.775: 0.4347% ( 64) 00:16:56.950 1.775 - 1.783: 1.0264% ( 98) 00:16:56.950 1.783 - 1.790: 1.6846% ( 109) 00:16:56.950 1.790 - 1.798: 2.2280% ( 90) 00:16:56.950 1.798 - 1.806: 3.2726% ( 173) 00:16:56.950 1.806 - 1.813: 13.7665% ( 1738) 00:16:56.950 1.813 - 1.821: 44.1010% ( 5024) 00:16:56.950 1.821 - 1.829: 70.0036% ( 4290) 00:16:56.950 1.829 - 1.836: 81.3549% ( 1880) 00:16:56.950 1.836 - 1.844: 87.9966% ( 1100) 00:16:56.950 1.844 - 1.851: 91.9515% ( 655) 00:16:56.950 1.851 - 1.859: 93.6300% ( 278) 00:16:56.950 1.859 - 1.867: 94.3666% ( 122) 00:16:56.950 1.867 - 1.874: 94.8074% ( 73) 00:16:56.950 1.874 - 1.882: 95.2663% ( 76) 00:16:56.950 1.882 - 1.890: 96.1720% ( 150) 00:16:56.950 1.890 - 1.897: 97.2226% ( 174) 00:16:56.950 1.897 - 1.905: 98.2611% ( 172) 00:16:56.950 1.905 - 1.912: 98.9011% ( 106) 00:16:56.950 1.912 - 1.920: 99.1668% ( 44) 00:16:56.950 1.920 - 1.928: 99.2936% ( 21) 00:16:56.950 1.928 - 1.935: 99.3358% ( 7) 00:16:56.950 1.935 - 1.943: 99.3419% ( 1) 00:16:56.950 1.943 - 1.950: 99.3479% ( 1) 00:16:56.951 1.950 - 1.966: 99.3539% ( 1) 00:16:56.951 1.981 - 1.996: 99.3600% ( 1) 00:16:56.951 2.103 - 2.118: 99.3660% ( 1) 00:16:56.951 3.383 - 3.398: 99.3721% ( 1) 
00:16:56.951 3.413 - 3.429: 99.3781% ( 1) 00:16:56.951 3.490 - 3.505: 99.3841% ( 1) 00:16:56.951 3.550 - 3.566: 99.3902% ( 1) 00:16:56.951 3.642 - 3.657: 99.3962% ( 1) 00:16:56.951 3.688 - 3.703: 99.4022% ( 1) 00:16:56.951 3.703 - 3.718: 99.4083% ( 1) 00:16:56.951 3.779 - 3.794: 99.4143% ( 1) 00:16:56.951 3.901 - 3.931: 99.4204% ( 1) 00:16:56.951 3.931 - 3.962: 99.4264% ( 1) 00:16:56.951 3.962 - 3.992: 99.4324% ( 1) 00:16:56.951 3.992 - 4.023: 99.4385% ( 1) 00:16:56.951 4.084 - 4.114: 99.4505% ( 2) 00:16:56.951 4.114 - 4.145: 99.4566% ( 1) 00:16:56.951 4.175 - 4.206: 99.4626% ( 1) 00:16:56.951 4.297 - 4.328: 99.4687% ( 1) 00:16:56.951 4.419 - 4.450: 99.4747% ( 1) 00:16:56.951 4.571 - 4.602: 99.4807% ( 1) 00:16:56.951 4.815 - 4.846: 99.4868% ( 1) 00:16:56.951 5.425 - 5.455: 99.4928% ( 1) 00:16:56.951 5.486 - 5.516: 99.4989% ( 1) 00:16:56.951 6.187 - 6.217: 99.5049% ( 1) 00:16:56.951 3978.971 - 3994.575: 99.5230% ( 3) 00:16:56.951 3994.575 - 4025.783: 100.0000% ( 79) 00:16:56.951 00:16:56.951 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:56.951 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:56.951 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:56.951 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:56.951 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:56.951 [ 00:16:56.951 { 00:16:56.951 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:56.951 "subtype": "Discovery", 00:16:56.951 "listen_addresses": [], 00:16:56.951 "allow_any_host": true, 00:16:56.951 "hosts": 
[] 00:16:56.951 }, 00:16:56.951 { 00:16:56.951 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:56.951 "subtype": "NVMe", 00:16:56.951 "listen_addresses": [ 00:16:56.951 { 00:16:56.951 "trtype": "VFIOUSER", 00:16:56.951 "adrfam": "IPv4", 00:16:56.951 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:56.951 "trsvcid": "0" 00:16:56.951 } 00:16:56.951 ], 00:16:56.951 "allow_any_host": true, 00:16:56.951 "hosts": [], 00:16:56.951 "serial_number": "SPDK1", 00:16:56.951 "model_number": "SPDK bdev Controller", 00:16:56.951 "max_namespaces": 32, 00:16:56.951 "min_cntlid": 1, 00:16:56.951 "max_cntlid": 65519, 00:16:56.951 "namespaces": [ 00:16:56.951 { 00:16:56.951 "nsid": 1, 00:16:56.951 "bdev_name": "Malloc1", 00:16:56.951 "name": "Malloc1", 00:16:56.951 "nguid": "FA562994FF2842FA83BB8EF77DEF5DC5", 00:16:56.951 "uuid": "fa562994-ff28-42fa-83bb-8ef77def5dc5" 00:16:56.951 } 00:16:56.951 ] 00:16:56.951 }, 00:16:56.951 { 00:16:56.951 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:56.951 "subtype": "NVMe", 00:16:56.951 "listen_addresses": [ 00:16:56.951 { 00:16:56.951 "trtype": "VFIOUSER", 00:16:56.951 "adrfam": "IPv4", 00:16:56.951 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:56.951 "trsvcid": "0" 00:16:56.951 } 00:16:56.951 ], 00:16:56.951 "allow_any_host": true, 00:16:56.951 "hosts": [], 00:16:56.951 "serial_number": "SPDK2", 00:16:56.951 "model_number": "SPDK bdev Controller", 00:16:56.951 "max_namespaces": 32, 00:16:56.951 "min_cntlid": 1, 00:16:56.951 "max_cntlid": 65519, 00:16:56.951 "namespaces": [ 00:16:56.951 { 00:16:56.951 "nsid": 1, 00:16:56.951 "bdev_name": "Malloc2", 00:16:56.951 "name": "Malloc2", 00:16:56.951 "nguid": "2C468F4A649D406AB9647A5ECE3152BC", 00:16:56.951 "uuid": "2c468f4a-649d-406a-b964-7a5ece3152bc" 00:16:56.951 } 00:16:56.951 ] 00:16:56.951 } 00:16:56.951 ] 00:16:56.951 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:56.951 10:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3895847 00:16:56.951 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:56.951 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:56.951 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:56.951 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:56.951 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:56.951 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:56.951 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:56.951 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:57.208 [2024-11-19 10:44:46.814624] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:57.208 Malloc3 00:16:57.208 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:57.466 [2024-11-19 10:44:47.035329] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:57.466 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:57.466 Asynchronous Event Request test 00:16:57.466 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:57.466 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:57.466 Registering asynchronous event callbacks... 00:16:57.466 Starting namespace attribute notice tests for all controllers... 00:16:57.466 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:57.466 aer_cb - Changed Namespace 00:16:57.466 Cleaning up... 00:16:57.466 [ 00:16:57.466 { 00:16:57.466 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:57.466 "subtype": "Discovery", 00:16:57.466 "listen_addresses": [], 00:16:57.466 "allow_any_host": true, 00:16:57.466 "hosts": [] 00:16:57.466 }, 00:16:57.466 { 00:16:57.466 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:57.466 "subtype": "NVMe", 00:16:57.466 "listen_addresses": [ 00:16:57.466 { 00:16:57.466 "trtype": "VFIOUSER", 00:16:57.466 "adrfam": "IPv4", 00:16:57.466 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:57.466 "trsvcid": "0" 00:16:57.466 } 00:16:57.466 ], 00:16:57.466 "allow_any_host": true, 00:16:57.466 "hosts": [], 00:16:57.466 "serial_number": "SPDK1", 00:16:57.466 "model_number": "SPDK bdev Controller", 00:16:57.466 "max_namespaces": 32, 00:16:57.466 "min_cntlid": 1, 00:16:57.466 "max_cntlid": 65519, 00:16:57.466 "namespaces": [ 00:16:57.466 { 00:16:57.466 "nsid": 1, 00:16:57.466 "bdev_name": "Malloc1", 00:16:57.466 "name": "Malloc1", 00:16:57.466 "nguid": "FA562994FF2842FA83BB8EF77DEF5DC5", 00:16:57.466 "uuid": "fa562994-ff28-42fa-83bb-8ef77def5dc5" 00:16:57.466 }, 00:16:57.466 { 00:16:57.466 "nsid": 2, 00:16:57.466 "bdev_name": "Malloc3", 00:16:57.466 "name": "Malloc3", 00:16:57.466 "nguid": "7CE6617C75E8484B9DFD4F4EA9065D83", 00:16:57.466 "uuid": "7ce6617c-75e8-484b-9dfd-4f4ea9065d83" 00:16:57.466 } 00:16:57.466 ] 00:16:57.466 }, 00:16:57.466 { 00:16:57.466 "nqn": 
"nqn.2019-07.io.spdk:cnode2", 00:16:57.466 "subtype": "NVMe", 00:16:57.466 "listen_addresses": [ 00:16:57.466 { 00:16:57.466 "trtype": "VFIOUSER", 00:16:57.466 "adrfam": "IPv4", 00:16:57.466 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:57.466 "trsvcid": "0" 00:16:57.466 } 00:16:57.466 ], 00:16:57.466 "allow_any_host": true, 00:16:57.466 "hosts": [], 00:16:57.466 "serial_number": "SPDK2", 00:16:57.466 "model_number": "SPDK bdev Controller", 00:16:57.466 "max_namespaces": 32, 00:16:57.466 "min_cntlid": 1, 00:16:57.466 "max_cntlid": 65519, 00:16:57.466 "namespaces": [ 00:16:57.466 { 00:16:57.466 "nsid": 1, 00:16:57.466 "bdev_name": "Malloc2", 00:16:57.466 "name": "Malloc2", 00:16:57.466 "nguid": "2C468F4A649D406AB9647A5ECE3152BC", 00:16:57.466 "uuid": "2c468f4a-649d-406a-b964-7a5ece3152bc" 00:16:57.466 } 00:16:57.466 ] 00:16:57.466 } 00:16:57.466 ] 00:16:57.466 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3895847 00:16:57.466 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:57.466 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:57.466 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:57.467 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:57.726 [2024-11-19 10:44:47.273102] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:16:57.726 [2024-11-19 10:44:47.273152] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895860 ] 00:16:57.726 [2024-11-19 10:44:47.314540] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:57.726 [2024-11-19 10:44:47.323445] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:57.726 [2024-11-19 10:44:47.323470] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5a8db9e000 00:16:57.726 [2024-11-19 10:44:47.324449] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:57.726 [2024-11-19 10:44:47.325453] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:57.726 [2024-11-19 10:44:47.326462] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:57.726 [2024-11-19 10:44:47.327463] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:57.726 [2024-11-19 10:44:47.328467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:57.726 [2024-11-19 10:44:47.329480] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:57.726 [2024-11-19 10:44:47.330492] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:57.726 
[2024-11-19 10:44:47.331494] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:57.726 [2024-11-19 10:44:47.332502] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:57.726 [2024-11-19 10:44:47.332514] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5a8db93000 00:16:57.726 [2024-11-19 10:44:47.333430] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:57.726 [2024-11-19 10:44:47.342782] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:57.726 [2024-11-19 10:44:47.342805] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:57.726 [2024-11-19 10:44:47.347890] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:57.726 [2024-11-19 10:44:47.347927] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:57.726 [2024-11-19 10:44:47.347994] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:57.726 [2024-11-19 10:44:47.348011] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:57.726 [2024-11-19 10:44:47.348016] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:57.726 [2024-11-19 10:44:47.348896] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:57.726 [2024-11-19 10:44:47.348905] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:57.726 [2024-11-19 10:44:47.348912] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:57.726 [2024-11-19 10:44:47.349897] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:57.726 [2024-11-19 10:44:47.349906] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:57.726 [2024-11-19 10:44:47.349912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:57.726 [2024-11-19 10:44:47.350906] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:57.726 [2024-11-19 10:44:47.350915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:57.726 [2024-11-19 10:44:47.351917] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:57.726 [2024-11-19 10:44:47.351924] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:57.726 [2024-11-19 10:44:47.351929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:57.726 [2024-11-19 10:44:47.351935] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:57.726 [2024-11-19 10:44:47.352042] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:57.726 [2024-11-19 10:44:47.352047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:57.726 [2024-11-19 10:44:47.352051] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:57.726 [2024-11-19 10:44:47.352927] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:57.726 [2024-11-19 10:44:47.353931] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:57.726 [2024-11-19 10:44:47.354936] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:57.726 [2024-11-19 10:44:47.355936] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:57.726 [2024-11-19 10:44:47.355972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:57.726 [2024-11-19 10:44:47.356945] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:57.726 [2024-11-19 10:44:47.356953] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:57.726 [2024-11-19 10:44:47.356960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:57.726 [2024-11-19 10:44:47.356976] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:57.726 [2024-11-19 10:44:47.356986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:57.726 [2024-11-19 10:44:47.356997] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:57.726 [2024-11-19 10:44:47.357002] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:57.726 [2024-11-19 10:44:47.357005] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:57.726 [2024-11-19 10:44:47.357016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:57.726 [2024-11-19 10:44:47.363208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:57.726 [2024-11-19 10:44:47.363219] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:57.726 [2024-11-19 10:44:47.363224] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:57.726 [2024-11-19 10:44:47.363228] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:57.726 [2024-11-19 10:44:47.363232] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:57.726 [2024-11-19 10:44:47.363238] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:57.726 [2024-11-19 10:44:47.363243] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:57.726 [2024-11-19 10:44:47.363247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:57.726 [2024-11-19 10:44:47.363255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.363265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:57.727 [2024-11-19 10:44:47.370207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:57.727 [2024-11-19 10:44:47.370220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.727 [2024-11-19 10:44:47.370228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.727 [2024-11-19 10:44:47.370235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.727 [2024-11-19 10:44:47.370243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.727 [2024-11-19 10:44:47.370247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.370253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.370261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:57.727 [2024-11-19 10:44:47.379207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:57.727 [2024-11-19 10:44:47.379217] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:57.727 [2024-11-19 10:44:47.379222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.379228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.379233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.379241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:57.727 [2024-11-19 10:44:47.387208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:57.727 [2024-11-19 10:44:47.387265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.387273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:57.727 
[2024-11-19 10:44:47.387280] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:57.727 [2024-11-19 10:44:47.387284] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:57.727 [2024-11-19 10:44:47.387287] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:57.727 [2024-11-19 10:44:47.387293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:57.727 [2024-11-19 10:44:47.395205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:57.727 [2024-11-19 10:44:47.395216] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:57.727 [2024-11-19 10:44:47.395224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.395231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.395237] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:57.727 [2024-11-19 10:44:47.395241] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:57.727 [2024-11-19 10:44:47.395244] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:57.727 [2024-11-19 10:44:47.395249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:57.727 [2024-11-19 10:44:47.403207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:57.727 [2024-11-19 10:44:47.403220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.403227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.403234] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:57.727 [2024-11-19 10:44:47.403238] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:57.727 [2024-11-19 10:44:47.403243] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:57.727 [2024-11-19 10:44:47.403249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:57.727 [2024-11-19 10:44:47.411206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:57.727 [2024-11-19 10:44:47.411215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.411221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.411228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.411233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.411238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.411242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.411247] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:57.727 [2024-11-19 10:44:47.411251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:57.727 [2024-11-19 10:44:47.411255] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:57.727 [2024-11-19 10:44:47.411271] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:57.727 [2024-11-19 10:44:47.419206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:57.727 [2024-11-19 10:44:47.419219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:57.727 [2024-11-19 10:44:47.427206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:57.727 [2024-11-19 10:44:47.427217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:57.727 [2024-11-19 10:44:47.435207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:57.727 [2024-11-19 
10:44:47.435218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:57.727 [2024-11-19 10:44:47.443207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:57.727 [2024-11-19 10:44:47.443222] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:57.727 [2024-11-19 10:44:47.443226] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:57.727 [2024-11-19 10:44:47.443229] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:57.727 [2024-11-19 10:44:47.443232] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:57.727 [2024-11-19 10:44:47.443235] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:57.727 [2024-11-19 10:44:47.443241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:57.727 [2024-11-19 10:44:47.443250] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:57.727 [2024-11-19 10:44:47.443254] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:57.727 [2024-11-19 10:44:47.443257] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:57.727 [2024-11-19 10:44:47.443262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:57.727 [2024-11-19 10:44:47.443268] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:57.727 [2024-11-19 10:44:47.443272] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:57.727 [2024-11-19 10:44:47.443275] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:57.727 [2024-11-19 10:44:47.443280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:57.727 [2024-11-19 10:44:47.443287] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:57.727 [2024-11-19 10:44:47.443291] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:57.727 [2024-11-19 10:44:47.443294] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:57.727 [2024-11-19 10:44:47.443299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:57.727 [2024-11-19 10:44:47.451209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:57.727 [2024-11-19 10:44:47.451223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:57.727 [2024-11-19 10:44:47.451232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:57.727 [2024-11-19 10:44:47.451239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:57.727 ===================================================== 00:16:57.727 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:57.727 ===================================================== 00:16:57.727 Controller Capabilities/Features 00:16:57.727 
================================ 00:16:57.727 Vendor ID: 4e58 00:16:57.727 Subsystem Vendor ID: 4e58 00:16:57.727 Serial Number: SPDK2 00:16:57.727 Model Number: SPDK bdev Controller 00:16:57.727 Firmware Version: 25.01 00:16:57.727 Recommended Arb Burst: 6 00:16:57.727 IEEE OUI Identifier: 8d 6b 50 00:16:57.728 Multi-path I/O 00:16:57.728 May have multiple subsystem ports: Yes 00:16:57.728 May have multiple controllers: Yes 00:16:57.728 Associated with SR-IOV VF: No 00:16:57.728 Max Data Transfer Size: 131072 00:16:57.728 Max Number of Namespaces: 32 00:16:57.728 Max Number of I/O Queues: 127 00:16:57.728 NVMe Specification Version (VS): 1.3 00:16:57.728 NVMe Specification Version (Identify): 1.3 00:16:57.728 Maximum Queue Entries: 256 00:16:57.728 Contiguous Queues Required: Yes 00:16:57.728 Arbitration Mechanisms Supported 00:16:57.728 Weighted Round Robin: Not Supported 00:16:57.728 Vendor Specific: Not Supported 00:16:57.728 Reset Timeout: 15000 ms 00:16:57.728 Doorbell Stride: 4 bytes 00:16:57.728 NVM Subsystem Reset: Not Supported 00:16:57.728 Command Sets Supported 00:16:57.728 NVM Command Set: Supported 00:16:57.728 Boot Partition: Not Supported 00:16:57.728 Memory Page Size Minimum: 4096 bytes 00:16:57.728 Memory Page Size Maximum: 4096 bytes 00:16:57.728 Persistent Memory Region: Not Supported 00:16:57.728 Optional Asynchronous Events Supported 00:16:57.728 Namespace Attribute Notices: Supported 00:16:57.728 Firmware Activation Notices: Not Supported 00:16:57.728 ANA Change Notices: Not Supported 00:16:57.728 PLE Aggregate Log Change Notices: Not Supported 00:16:57.728 LBA Status Info Alert Notices: Not Supported 00:16:57.728 EGE Aggregate Log Change Notices: Not Supported 00:16:57.728 Normal NVM Subsystem Shutdown event: Not Supported 00:16:57.728 Zone Descriptor Change Notices: Not Supported 00:16:57.728 Discovery Log Change Notices: Not Supported 00:16:57.728 Controller Attributes 00:16:57.728 128-bit Host Identifier: Supported 00:16:57.728 
Non-Operational Permissive Mode: Not Supported 00:16:57.728 NVM Sets: Not Supported 00:16:57.728 Read Recovery Levels: Not Supported 00:16:57.728 Endurance Groups: Not Supported 00:16:57.728 Predictable Latency Mode: Not Supported 00:16:57.728 Traffic Based Keep ALive: Not Supported 00:16:57.728 Namespace Granularity: Not Supported 00:16:57.728 SQ Associations: Not Supported 00:16:57.728 UUID List: Not Supported 00:16:57.728 Multi-Domain Subsystem: Not Supported 00:16:57.728 Fixed Capacity Management: Not Supported 00:16:57.728 Variable Capacity Management: Not Supported 00:16:57.728 Delete Endurance Group: Not Supported 00:16:57.728 Delete NVM Set: Not Supported 00:16:57.728 Extended LBA Formats Supported: Not Supported 00:16:57.728 Flexible Data Placement Supported: Not Supported 00:16:57.728 00:16:57.728 Controller Memory Buffer Support 00:16:57.728 ================================ 00:16:57.728 Supported: No 00:16:57.728 00:16:57.728 Persistent Memory Region Support 00:16:57.728 ================================ 00:16:57.728 Supported: No 00:16:57.728 00:16:57.728 Admin Command Set Attributes 00:16:57.728 ============================ 00:16:57.728 Security Send/Receive: Not Supported 00:16:57.728 Format NVM: Not Supported 00:16:57.728 Firmware Activate/Download: Not Supported 00:16:57.728 Namespace Management: Not Supported 00:16:57.728 Device Self-Test: Not Supported 00:16:57.728 Directives: Not Supported 00:16:57.728 NVMe-MI: Not Supported 00:16:57.728 Virtualization Management: Not Supported 00:16:57.728 Doorbell Buffer Config: Not Supported 00:16:57.728 Get LBA Status Capability: Not Supported 00:16:57.728 Command & Feature Lockdown Capability: Not Supported 00:16:57.728 Abort Command Limit: 4 00:16:57.728 Async Event Request Limit: 4 00:16:57.728 Number of Firmware Slots: N/A 00:16:57.728 Firmware Slot 1 Read-Only: N/A 00:16:57.728 Firmware Activation Without Reset: N/A 00:16:57.728 Multiple Update Detection Support: N/A 00:16:57.728 Firmware Update 
Granularity: No Information Provided 00:16:57.728 Per-Namespace SMART Log: No 00:16:57.728 Asymmetric Namespace Access Log Page: Not Supported 00:16:57.728 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:57.728 Command Effects Log Page: Supported 00:16:57.728 Get Log Page Extended Data: Supported 00:16:57.728 Telemetry Log Pages: Not Supported 00:16:57.728 Persistent Event Log Pages: Not Supported 00:16:57.728 Supported Log Pages Log Page: May Support 00:16:57.728 Commands Supported & Effects Log Page: Not Supported 00:16:57.728 Feature Identifiers & Effects Log Page:May Support 00:16:57.728 NVMe-MI Commands & Effects Log Page: May Support 00:16:57.728 Data Area 4 for Telemetry Log: Not Supported 00:16:57.728 Error Log Page Entries Supported: 128 00:16:57.728 Keep Alive: Supported 00:16:57.728 Keep Alive Granularity: 10000 ms 00:16:57.728 00:16:57.728 NVM Command Set Attributes 00:16:57.728 ========================== 00:16:57.728 Submission Queue Entry Size 00:16:57.728 Max: 64 00:16:57.728 Min: 64 00:16:57.728 Completion Queue Entry Size 00:16:57.728 Max: 16 00:16:57.728 Min: 16 00:16:57.728 Number of Namespaces: 32 00:16:57.728 Compare Command: Supported 00:16:57.728 Write Uncorrectable Command: Not Supported 00:16:57.728 Dataset Management Command: Supported 00:16:57.728 Write Zeroes Command: Supported 00:16:57.728 Set Features Save Field: Not Supported 00:16:57.728 Reservations: Not Supported 00:16:57.728 Timestamp: Not Supported 00:16:57.728 Copy: Supported 00:16:57.728 Volatile Write Cache: Present 00:16:57.728 Atomic Write Unit (Normal): 1 00:16:57.728 Atomic Write Unit (PFail): 1 00:16:57.728 Atomic Compare & Write Unit: 1 00:16:57.728 Fused Compare & Write: Supported 00:16:57.728 Scatter-Gather List 00:16:57.728 SGL Command Set: Supported (Dword aligned) 00:16:57.728 SGL Keyed: Not Supported 00:16:57.728 SGL Bit Bucket Descriptor: Not Supported 00:16:57.728 SGL Metadata Pointer: Not Supported 00:16:57.728 Oversized SGL: Not Supported 00:16:57.728 SGL 
Metadata Address: Not Supported 00:16:57.728 SGL Offset: Not Supported 00:16:57.728 Transport SGL Data Block: Not Supported 00:16:57.728 Replay Protected Memory Block: Not Supported 00:16:57.728 00:16:57.728 Firmware Slot Information 00:16:57.728 ========================= 00:16:57.728 Active slot: 1 00:16:57.728 Slot 1 Firmware Revision: 25.01 00:16:57.728 00:16:57.728 00:16:57.728 Commands Supported and Effects 00:16:57.728 ============================== 00:16:57.728 Admin Commands 00:16:57.728 -------------- 00:16:57.728 Get Log Page (02h): Supported 00:16:57.728 Identify (06h): Supported 00:16:57.728 Abort (08h): Supported 00:16:57.728 Set Features (09h): Supported 00:16:57.728 Get Features (0Ah): Supported 00:16:57.728 Asynchronous Event Request (0Ch): Supported 00:16:57.728 Keep Alive (18h): Supported 00:16:57.728 I/O Commands 00:16:57.728 ------------ 00:16:57.728 Flush (00h): Supported LBA-Change 00:16:57.728 Write (01h): Supported LBA-Change 00:16:57.728 Read (02h): Supported 00:16:57.728 Compare (05h): Supported 00:16:57.728 Write Zeroes (08h): Supported LBA-Change 00:16:57.728 Dataset Management (09h): Supported LBA-Change 00:16:57.728 Copy (19h): Supported LBA-Change 00:16:57.728 00:16:57.728 Error Log 00:16:57.728 ========= 00:16:57.728 00:16:57.728 Arbitration 00:16:57.728 =========== 00:16:57.728 Arbitration Burst: 1 00:16:57.728 00:16:57.728 Power Management 00:16:57.728 ================ 00:16:57.728 Number of Power States: 1 00:16:57.728 Current Power State: Power State #0 00:16:57.728 Power State #0: 00:16:57.728 Max Power: 0.00 W 00:16:57.728 Non-Operational State: Operational 00:16:57.728 Entry Latency: Not Reported 00:16:57.728 Exit Latency: Not Reported 00:16:57.728 Relative Read Throughput: 0 00:16:57.728 Relative Read Latency: 0 00:16:57.728 Relative Write Throughput: 0 00:16:57.728 Relative Write Latency: 0 00:16:57.728 Idle Power: Not Reported 00:16:57.728 Active Power: Not Reported 00:16:57.728 Non-Operational Permissive Mode: Not 
Supported 00:16:57.728 00:16:57.728 Health Information 00:16:57.728 ================== 00:16:57.728 Critical Warnings: 00:16:57.728 Available Spare Space: OK 00:16:57.728 Temperature: OK 00:16:57.728 Device Reliability: OK 00:16:57.728 Read Only: No 00:16:57.728 Volatile Memory Backup: OK 00:16:57.728 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:57.728 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:57.728 Available Spare: 0% 00:16:57.728 Available Sp[2024-11-19 10:44:47.451327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:57.728 [2024-11-19 10:44:47.459207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:57.728 [2024-11-19 10:44:47.459235] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:57.728 [2024-11-19 10:44:47.459244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.729 [2024-11-19 10:44:47.459250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.729 [2024-11-19 10:44:47.459255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.729 [2024-11-19 10:44:47.459261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.729 [2024-11-19 10:44:47.459302] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:57.729 [2024-11-19 10:44:47.459313] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:57.729 
[2024-11-19 10:44:47.460305] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:57.729 [2024-11-19 10:44:47.460351] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:57.729 [2024-11-19 10:44:47.460357] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:57.729 [2024-11-19 10:44:47.461311] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:57.729 [2024-11-19 10:44:47.461322] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:57.729 [2024-11-19 10:44:47.461369] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:57.729 [2024-11-19 10:44:47.464207] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:57.729 are Threshold: 0% 00:16:57.729 Life Percentage Used: 0% 00:16:57.729 Data Units Read: 0 00:16:57.729 Data Units Written: 0 00:16:57.729 Host Read Commands: 0 00:16:57.729 Host Write Commands: 0 00:16:57.729 Controller Busy Time: 0 minutes 00:16:57.729 Power Cycles: 0 00:16:57.729 Power On Hours: 0 hours 00:16:57.729 Unsafe Shutdowns: 0 00:16:57.729 Unrecoverable Media Errors: 0 00:16:57.729 Lifetime Error Log Entries: 0 00:16:57.729 Warning Temperature Time: 0 minutes 00:16:57.729 Critical Temperature Time: 0 minutes 00:16:57.729 00:16:57.729 Number of Queues 00:16:57.729 ================ 00:16:57.729 Number of I/O Submission Queues: 127 00:16:57.729 Number of I/O Completion Queues: 127 00:16:57.729 00:16:57.729 Active Namespaces 00:16:57.729 ================= 00:16:57.729 Namespace ID:1 00:16:57.729 Error Recovery Timeout: Unlimited 
00:16:57.729 Command Set Identifier: NVM (00h) 00:16:57.729 Deallocate: Supported 00:16:57.729 Deallocated/Unwritten Error: Not Supported 00:16:57.729 Deallocated Read Value: Unknown 00:16:57.729 Deallocate in Write Zeroes: Not Supported 00:16:57.729 Deallocated Guard Field: 0xFFFF 00:16:57.729 Flush: Supported 00:16:57.729 Reservation: Supported 00:16:57.729 Namespace Sharing Capabilities: Multiple Controllers 00:16:57.729 Size (in LBAs): 131072 (0GiB) 00:16:57.729 Capacity (in LBAs): 131072 (0GiB) 00:16:57.729 Utilization (in LBAs): 131072 (0GiB) 00:16:57.729 NGUID: 2C468F4A649D406AB9647A5ECE3152BC 00:16:57.729 UUID: 2c468f4a-649d-406a-b964-7a5ece3152bc 00:16:57.729 Thin Provisioning: Not Supported 00:16:57.729 Per-NS Atomic Units: Yes 00:16:57.729 Atomic Boundary Size (Normal): 0 00:16:57.729 Atomic Boundary Size (PFail): 0 00:16:57.729 Atomic Boundary Offset: 0 00:16:57.729 Maximum Single Source Range Length: 65535 00:16:57.729 Maximum Copy Length: 65535 00:16:57.729 Maximum Source Range Count: 1 00:16:57.729 NGUID/EUI64 Never Reused: No 00:16:57.729 Namespace Write Protected: No 00:16:57.729 Number of LBA Formats: 1 00:16:57.729 Current LBA Format: LBA Format #00 00:16:57.729 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:57.729 00:16:57.729 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:57.986 [2024-11-19 10:44:47.692579] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:03.237 Initializing NVMe Controllers 00:17:03.237 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:03.237 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:17:03.237 Initialization complete. Launching workers. 00:17:03.237 ======================================================== 00:17:03.237 Latency(us) 00:17:03.237 Device Information : IOPS MiB/s Average min max 00:17:03.237 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39972.02 156.14 3202.06 952.21 8198.45 00:17:03.237 ======================================================== 00:17:03.237 Total : 39972.02 156.14 3202.06 952.21 8198.45 00:17:03.237 00:17:03.237 [2024-11-19 10:44:52.795462] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:03.237 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:03.494 [2024-11-19 10:44:53.028179] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:08.746 Initializing NVMe Controllers 00:17:08.746 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:08.746 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:08.746 Initialization complete. Launching workers. 
00:17:08.746 ======================================================== 00:17:08.746 Latency(us) 00:17:08.746 Device Information : IOPS MiB/s Average min max 00:17:08.746 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39959.40 156.09 3203.32 960.27 9394.55 00:17:08.746 ======================================================== 00:17:08.746 Total : 39959.40 156.09 3203.32 960.27 9394.55 00:17:08.746 00:17:08.746 [2024-11-19 10:44:58.049034] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:08.746 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:08.746 [2024-11-19 10:44:58.261573] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:14.019 [2024-11-19 10:45:03.397303] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:14.019 Initializing NVMe Controllers 00:17:14.019 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:14.019 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:14.019 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:14.019 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:14.019 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:14.019 Initialization complete. Launching workers. 
00:17:14.019 Starting thread on core 2 00:17:14.019 Starting thread on core 3 00:17:14.019 Starting thread on core 1 00:17:14.019 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:14.019 [2024-11-19 10:45:03.696604] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:17.336 [2024-11-19 10:45:06.750888] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:17.336 Initializing NVMe Controllers 00:17:17.336 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:17.336 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:17.336 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:17.336 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:17.336 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:17.336 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:17.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:17.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:17.336 Initialization complete. Launching workers. 
00:17:17.336 Starting thread on core 1 with urgent priority queue 00:17:17.336 Starting thread on core 2 with urgent priority queue 00:17:17.336 Starting thread on core 3 with urgent priority queue 00:17:17.336 Starting thread on core 0 with urgent priority queue 00:17:17.336 SPDK bdev Controller (SPDK2 ) core 0: 8306.00 IO/s 12.04 secs/100000 ios 00:17:17.336 SPDK bdev Controller (SPDK2 ) core 1: 9029.33 IO/s 11.08 secs/100000 ios 00:17:17.336 SPDK bdev Controller (SPDK2 ) core 2: 7842.00 IO/s 12.75 secs/100000 ios 00:17:17.336 SPDK bdev Controller (SPDK2 ) core 3: 8276.00 IO/s 12.08 secs/100000 ios 00:17:17.336 ======================================================== 00:17:17.336 00:17:17.336 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:17.336 [2024-11-19 10:45:07.032986] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:17.336 Initializing NVMe Controllers 00:17:17.336 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:17.336 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:17.336 Namespace ID: 1 size: 0GB 00:17:17.336 Initialization complete. 00:17:17.336 INFO: using host memory buffer for IO 00:17:17.336 Hello world! 
00:17:17.336 [2024-11-19 10:45:07.043045] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:17.336 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:17.592 [2024-11-19 10:45:07.320947] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:18.957 Initializing NVMe Controllers 00:17:18.957 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:18.957 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:18.957 Initialization complete. Launching workers. 00:17:18.957 submit (in ns) avg, min, max = 6413.7, 3201.0, 4995413.3 00:17:18.957 complete (in ns) avg, min, max = 19764.7, 1762.9, 6988421.0 00:17:18.957 00:17:18.957 Submit histogram 00:17:18.957 ================ 00:17:18.957 Range in us Cumulative Count 00:17:18.957 3.200 - 3.215: 0.0673% ( 11) 00:17:18.957 3.215 - 3.230: 0.2386% ( 28) 00:17:18.957 3.230 - 3.246: 0.5567% ( 52) 00:17:18.957 3.246 - 3.261: 1.5049% ( 155) 00:17:18.957 3.261 - 3.276: 5.0957% ( 587) 00:17:18.957 3.276 - 3.291: 11.2926% ( 1013) 00:17:18.957 3.291 - 3.307: 17.1713% ( 961) 00:17:18.957 3.307 - 3.322: 23.9799% ( 1113) 00:17:18.957 3.322 - 3.337: 30.7518% ( 1107) 00:17:18.957 3.337 - 3.352: 35.9577% ( 851) 00:17:18.957 3.352 - 3.368: 41.5734% ( 918) 00:17:18.957 3.368 - 3.383: 47.4399% ( 959) 00:17:18.957 3.383 - 3.398: 52.4133% ( 813) 00:17:18.957 3.398 - 3.413: 57.2582% ( 792) 00:17:18.957 3.413 - 3.429: 64.6235% ( 1204) 00:17:18.957 3.429 - 3.444: 72.1050% ( 1223) 00:17:18.957 3.444 - 3.459: 76.7480% ( 759) 00:17:18.957 3.459 - 3.474: 81.1892% ( 726) 00:17:18.957 3.474 - 3.490: 84.1500% ( 484) 00:17:18.957 3.490 - 3.505: 86.1932% ( 334) 00:17:18.957 3.505 - 3.520: 87.2392% ( 
171) 00:17:18.957 3.520 - 3.535: 87.7103% ( 77) 00:17:18.957 3.535 - 3.550: 87.9978% ( 47) 00:17:18.957 3.550 - 3.566: 88.3893% ( 64) 00:17:18.957 3.566 - 3.581: 89.0500% ( 108) 00:17:18.957 3.581 - 3.596: 89.9003% ( 139) 00:17:18.957 3.596 - 3.611: 90.7995% ( 147) 00:17:18.957 3.611 - 3.627: 91.6315% ( 136) 00:17:18.957 3.627 - 3.642: 92.6531% ( 167) 00:17:18.957 3.642 - 3.657: 93.5585% ( 148) 00:17:18.957 3.657 - 3.672: 94.5066% ( 155) 00:17:18.957 3.672 - 3.688: 95.7056% ( 196) 00:17:18.957 3.688 - 3.703: 96.7150% ( 165) 00:17:18.957 3.703 - 3.718: 97.4613% ( 122) 00:17:18.957 3.718 - 3.733: 98.1403% ( 111) 00:17:18.957 3.733 - 3.749: 98.5502% ( 67) 00:17:18.957 3.749 - 3.764: 98.8438% ( 48) 00:17:18.957 3.764 - 3.779: 99.1375% ( 48) 00:17:18.957 3.779 - 3.794: 99.2476% ( 18) 00:17:18.957 3.794 - 3.810: 99.4189% ( 28) 00:17:18.957 3.810 - 3.825: 99.5167% ( 16) 00:17:18.957 3.825 - 3.840: 99.5596% ( 7) 00:17:18.957 3.840 - 3.855: 99.5963% ( 6) 00:17:18.957 3.855 - 3.870: 99.6024% ( 1) 00:17:18.957 3.870 - 3.886: 99.6085% ( 1) 00:17:18.957 3.992 - 4.023: 99.6146% ( 1) 00:17:18.957 5.486 - 5.516: 99.6207% ( 1) 00:17:18.957 5.638 - 5.669: 99.6268% ( 1) 00:17:18.957 5.699 - 5.730: 99.6330% ( 1) 00:17:18.957 5.730 - 5.760: 99.6391% ( 1) 00:17:18.957 5.851 - 5.882: 99.6452% ( 1) 00:17:18.957 5.912 - 5.943: 99.6513% ( 1) 00:17:18.957 6.034 - 6.065: 99.6574% ( 1) 00:17:18.957 6.217 - 6.248: 99.6697% ( 2) 00:17:18.957 6.309 - 6.339: 99.6880% ( 3) 00:17:18.957 6.370 - 6.400: 99.6941% ( 1) 00:17:18.957 6.400 - 6.430: 99.7064% ( 2) 00:17:18.957 6.461 - 6.491: 99.7125% ( 1) 00:17:18.957 6.491 - 6.522: 99.7186% ( 1) 00:17:18.957 6.613 - 6.644: 99.7247% ( 1) 00:17:18.957 6.644 - 6.674: 99.7308% ( 1) 00:17:18.957 6.735 - 6.766: 99.7370% ( 1) 00:17:18.957 6.766 - 6.796: 99.7553% ( 3) 00:17:18.957 6.796 - 6.827: 99.7675% ( 2) 00:17:18.957 6.857 - 6.888: 99.7737% ( 1) 00:17:18.957 6.949 - 6.979: 99.7798% ( 1) 00:17:18.957 7.040 - 7.070: 99.7859% ( 1) 00:17:18.957 7.070 - 7.101: 
99.8042% ( 3) 00:17:18.957 7.131 - 7.162: 99.8104% ( 1) 00:17:18.957 7.192 - 7.223: 99.8165% ( 1) 00:17:18.957 7.253 - 7.284: 99.8348% ( 3) 00:17:18.957 7.284 - 7.314: 99.8409% ( 1) 00:17:18.957 7.375 - 7.406: 99.8593% ( 3) 00:17:18.957 7.558 - 7.589: 99.8654% ( 1) 00:17:18.957 7.589 - 7.619: 99.8777% ( 2) 00:17:18.957 7.650 - 7.680: 99.8838% ( 1) 00:17:18.957 7.802 - 7.863: 99.8899% ( 1) 00:17:18.957 8.107 - 8.168: 99.8960% ( 1) 00:17:18.957 8.168 - 8.229: 99.9082% ( 2) 00:17:18.957 [2024-11-19 10:45:08.414181] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:18.957 9.265 - 9.326: 99.9144% ( 1) 00:17:18.957 9.326 - 9.387: 99.9205% ( 1) 00:17:18.957 14.263 - 14.324: 99.9266% ( 1) 00:17:18.957 3994.575 - 4025.783: 99.9939% ( 11) 00:17:18.957 4993.219 - 5024.427: 100.0000% ( 1) 00:17:18.957 00:17:18.957 Complete histogram 00:17:18.957 ================== 00:17:18.958 Range in us Cumulative Count 00:17:18.958 1.760 - 1.768: 0.0245% ( 4) 00:17:18.958 1.768 - 1.775: 0.1958% ( 28) 00:17:18.958 1.775 - 1.783: 0.5750% ( 62) 00:17:18.958 1.783 - 1.790: 1.3886% ( 133) 00:17:18.958 1.790 - 1.798: 2.2879% ( 147) 00:17:18.958 1.798 - 1.806: 2.7650% ( 78) 00:17:18.958 1.806 - 1.813: 4.1047% ( 219) 00:17:18.958 1.813 - 1.821: 17.9605% ( 2265) 00:17:18.958 1.821 - 1.829: 54.5727% ( 5985) 00:17:18.958 1.829 - 1.836: 81.2565% ( 4362) 00:17:18.958 1.836 - 1.844: 89.7718% ( 1392) 00:17:18.958 1.844 - 1.851: 93.0201% ( 531) 00:17:18.958 1.851 - 1.859: 95.3141% ( 375) 00:17:18.958 1.859 - 1.867: 96.2745% ( 157) 00:17:18.958 1.867 - 1.874: 96.5804% ( 50) 00:17:18.958 1.874 - 1.882: 96.8312% ( 41) 00:17:18.958 1.882 - 1.890: 97.2105% ( 62) 00:17:18.958 1.890 - 1.897: 97.6876% ( 78) 00:17:18.958 1.897 - 1.905: 98.3239% ( 104) 00:17:18.958 1.905 - 1.912: 98.8194% ( 81) 00:17:18.958 1.912 - 1.920: 99.1008% ( 46) 00:17:18.958 1.920 - 1.928: 99.2170% ( 19) 00:17:18.958 1.928 - 1.935: 99.2720% ( 9) 00:17:18.958 1.935 - 1.943: 99.2965% ( 
4) 00:17:18.958 1.943 - 1.950: 99.3026% ( 1) 00:17:18.958 1.966 - 1.981: 99.3087% ( 1) 00:17:18.958 1.981 - 1.996: 99.3210% ( 2) 00:17:18.958 1.996 - 2.011: 99.3393% ( 3) 00:17:18.958 2.011 - 2.027: 99.3516% ( 2) 00:17:18.958 2.027 - 2.042: 99.3577% ( 1) 00:17:18.958 2.042 - 2.057: 99.3638% ( 1) 00:17:18.958 2.057 - 2.072: 99.3699% ( 1) 00:17:18.958 2.133 - 2.149: 99.3760% ( 1) 00:17:18.958 2.164 - 2.179: 99.3821% ( 1) 00:17:18.958 3.886 - 3.901: 99.3883% ( 1) 00:17:18.958 3.901 - 3.931: 99.3944% ( 1) 00:17:18.958 4.084 - 4.114: 99.4005% ( 1) 00:17:18.958 4.206 - 4.236: 99.4066% ( 1) 00:17:18.958 4.267 - 4.297: 99.4127% ( 1) 00:17:18.958 4.358 - 4.389: 99.4189% ( 1) 00:17:18.958 4.389 - 4.419: 99.4250% ( 1) 00:17:18.958 4.510 - 4.541: 99.4311% ( 1) 00:17:18.958 4.876 - 4.907: 99.4372% ( 1) 00:17:18.958 4.937 - 4.968: 99.4433% ( 1) 00:17:18.958 4.998 - 5.029: 99.4494% ( 1) 00:17:18.958 5.211 - 5.242: 99.4556% ( 1) 00:17:18.958 5.303 - 5.333: 99.4617% ( 1) 00:17:18.958 5.730 - 5.760: 99.4739% ( 2) 00:17:18.958 5.760 - 5.790: 99.4861% ( 2) 00:17:18.958 6.126 - 6.156: 99.4923% ( 1) 00:17:18.958 6.156 - 6.187: 99.4984% ( 1) 00:17:18.958 6.187 - 6.217: 99.5106% ( 2) 00:17:18.958 6.461 - 6.491: 99.5167% ( 1) 00:17:18.958 10.118 - 10.179: 99.5228% ( 1) 00:17:18.958 11.337 - 11.398: 99.5290% ( 1) 00:17:18.958 11.825 - 11.886: 99.5351% ( 1) 00:17:18.958 12.069 - 12.130: 99.5412% ( 1) 00:17:18.958 29.379 - 29.501: 99.5473% ( 1) 00:17:18.958 38.766 - 39.010: 99.5534% ( 1) 00:17:18.958 2371.779 - 2387.383: 99.5596% ( 1) 00:17:18.958 3994.575 - 4025.783: 99.9939% ( 71) 00:17:18.958 6959.299 - 6990.507: 100.0000% ( 1) 00:17:18.958 00:17:18.958 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:18.958 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:18.958 10:45:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:18.958 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:18.958 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:18.958 [ 00:17:18.958 { 00:17:18.958 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:18.958 "subtype": "Discovery", 00:17:18.958 "listen_addresses": [], 00:17:18.958 "allow_any_host": true, 00:17:18.958 "hosts": [] 00:17:18.958 }, 00:17:18.958 { 00:17:18.958 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:18.958 "subtype": "NVMe", 00:17:18.958 "listen_addresses": [ 00:17:18.958 { 00:17:18.958 "trtype": "VFIOUSER", 00:17:18.958 "adrfam": "IPv4", 00:17:18.958 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:18.958 "trsvcid": "0" 00:17:18.958 } 00:17:18.958 ], 00:17:18.958 "allow_any_host": true, 00:17:18.958 "hosts": [], 00:17:18.958 "serial_number": "SPDK1", 00:17:18.958 "model_number": "SPDK bdev Controller", 00:17:18.958 "max_namespaces": 32, 00:17:18.958 "min_cntlid": 1, 00:17:18.958 "max_cntlid": 65519, 00:17:18.958 "namespaces": [ 00:17:18.958 { 00:17:18.958 "nsid": 1, 00:17:18.958 "bdev_name": "Malloc1", 00:17:18.958 "name": "Malloc1", 00:17:18.958 "nguid": "FA562994FF2842FA83BB8EF77DEF5DC5", 00:17:18.958 "uuid": "fa562994-ff28-42fa-83bb-8ef77def5dc5" 00:17:18.958 }, 00:17:18.958 { 00:17:18.958 "nsid": 2, 00:17:18.958 "bdev_name": "Malloc3", 00:17:18.958 "name": "Malloc3", 00:17:18.958 "nguid": "7CE6617C75E8484B9DFD4F4EA9065D83", 00:17:18.958 "uuid": "7ce6617c-75e8-484b-9dfd-4f4ea9065d83" 00:17:18.958 } 00:17:18.958 ] 00:17:18.958 }, 00:17:18.958 { 00:17:18.958 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:18.958 "subtype": "NVMe", 00:17:18.958 "listen_addresses": [ 00:17:18.958 { 00:17:18.958 "trtype": "VFIOUSER", 00:17:18.958 
"adrfam": "IPv4", 00:17:18.958 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:18.958 "trsvcid": "0" 00:17:18.958 } 00:17:18.958 ], 00:17:18.958 "allow_any_host": true, 00:17:18.958 "hosts": [], 00:17:18.958 "serial_number": "SPDK2", 00:17:18.958 "model_number": "SPDK bdev Controller", 00:17:18.958 "max_namespaces": 32, 00:17:18.958 "min_cntlid": 1, 00:17:18.958 "max_cntlid": 65519, 00:17:18.958 "namespaces": [ 00:17:18.958 { 00:17:18.958 "nsid": 1, 00:17:18.958 "bdev_name": "Malloc2", 00:17:18.958 "name": "Malloc2", 00:17:18.958 "nguid": "2C468F4A649D406AB9647A5ECE3152BC", 00:17:18.958 "uuid": "2c468f4a-649d-406a-b964-7a5ece3152bc" 00:17:18.958 } 00:17:18.958 ] 00:17:18.958 } 00:17:18.958 ] 00:17:18.958 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:18.958 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3899606 00:17:18.958 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:18.958 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:18.958 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:18.958 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:18.958 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:18.958 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:18.958 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:18.958 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:19.214 [2024-11-19 10:45:08.823619] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:19.214 Malloc4 00:17:19.214 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:19.470 [2024-11-19 10:45:09.089602] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:19.470 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:19.470 Asynchronous Event Request test 00:17:19.470 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:19.471 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:19.471 Registering asynchronous event callbacks... 00:17:19.471 Starting namespace attribute notice tests for all controllers... 00:17:19.471 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:19.471 aer_cb - Changed Namespace 00:17:19.471 Cleaning up... 
00:17:19.727 [ 00:17:19.727 { 00:17:19.727 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:19.727 "subtype": "Discovery", 00:17:19.727 "listen_addresses": [], 00:17:19.727 "allow_any_host": true, 00:17:19.727 "hosts": [] 00:17:19.727 }, 00:17:19.727 { 00:17:19.727 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:19.727 "subtype": "NVMe", 00:17:19.727 "listen_addresses": [ 00:17:19.727 { 00:17:19.727 "trtype": "VFIOUSER", 00:17:19.727 "adrfam": "IPv4", 00:17:19.727 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:19.727 "trsvcid": "0" 00:17:19.727 } 00:17:19.727 ], 00:17:19.727 "allow_any_host": true, 00:17:19.727 "hosts": [], 00:17:19.727 "serial_number": "SPDK1", 00:17:19.727 "model_number": "SPDK bdev Controller", 00:17:19.727 "max_namespaces": 32, 00:17:19.727 "min_cntlid": 1, 00:17:19.727 "max_cntlid": 65519, 00:17:19.727 "namespaces": [ 00:17:19.727 { 00:17:19.727 "nsid": 1, 00:17:19.727 "bdev_name": "Malloc1", 00:17:19.727 "name": "Malloc1", 00:17:19.727 "nguid": "FA562994FF2842FA83BB8EF77DEF5DC5", 00:17:19.727 "uuid": "fa562994-ff28-42fa-83bb-8ef77def5dc5" 00:17:19.727 }, 00:17:19.727 { 00:17:19.727 "nsid": 2, 00:17:19.727 "bdev_name": "Malloc3", 00:17:19.727 "name": "Malloc3", 00:17:19.727 "nguid": "7CE6617C75E8484B9DFD4F4EA9065D83", 00:17:19.727 "uuid": "7ce6617c-75e8-484b-9dfd-4f4ea9065d83" 00:17:19.727 } 00:17:19.727 ] 00:17:19.727 }, 00:17:19.727 { 00:17:19.727 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:19.727 "subtype": "NVMe", 00:17:19.727 "listen_addresses": [ 00:17:19.727 { 00:17:19.727 "trtype": "VFIOUSER", 00:17:19.727 "adrfam": "IPv4", 00:17:19.727 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:19.727 "trsvcid": "0" 00:17:19.727 } 00:17:19.727 ], 00:17:19.727 "allow_any_host": true, 00:17:19.727 "hosts": [], 00:17:19.727 "serial_number": "SPDK2", 00:17:19.727 "model_number": "SPDK bdev Controller", 00:17:19.727 "max_namespaces": 32, 00:17:19.727 "min_cntlid": 1, 00:17:19.727 "max_cntlid": 65519, 00:17:19.727 "namespaces": [ 
00:17:19.727 { 00:17:19.727 "nsid": 1, 00:17:19.727 "bdev_name": "Malloc2", 00:17:19.727 "name": "Malloc2", 00:17:19.727 "nguid": "2C468F4A649D406AB9647A5ECE3152BC", 00:17:19.727 "uuid": "2c468f4a-649d-406a-b964-7a5ece3152bc" 00:17:19.727 }, 00:17:19.727 { 00:17:19.727 "nsid": 2, 00:17:19.727 "bdev_name": "Malloc4", 00:17:19.727 "name": "Malloc4", 00:17:19.727 "nguid": "8A0E4E6E484542188B1028B4C1331FB0", 00:17:19.727 "uuid": "8a0e4e6e-4845-4218-8b10-28b4c1331fb0" 00:17:19.727 } 00:17:19.727 ] 00:17:19.727 } 00:17:19.727 ] 00:17:19.727 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3899606 00:17:19.727 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:19.727 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3891709 00:17:19.727 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3891709 ']' 00:17:19.727 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3891709 00:17:19.727 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:19.727 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.727 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3891709 00:17:19.727 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:19.727 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:19.727 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3891709' 00:17:19.727 killing process with pid 3891709 00:17:19.727 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 3891709 00:17:19.727 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3891709 00:17:19.984 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:19.984 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:19.984 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:19.984 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:19.984 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:19.984 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3900004 00:17:19.984 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3900004' 00:17:19.984 Process pid: 3900004 00:17:19.984 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:19.984 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:19.984 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3900004 00:17:19.985 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3900004 ']' 00:17:19.985 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.985 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.985 
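The `killprocess` sequence traced just above (a `kill -0` liveness probe, a `ps -o comm=` name check, the `kill`, then a `wait` on the pid) can be sketched as below. This is a hedged reconstruction for illustration: the guard against killing a `sudo`-named process mirrors the `'[' reactor_0 = sudo ']'` check in the trace, and `wait` only reaps processes that are children of the calling shell.

```shell
#!/usr/bin/env bash
# Hedged sketch of the killprocess pattern from the trace above.
killprocess() {
    local pid=$1
    # Probe that the pid is alive before doing anything else
    kill -0 "$pid" 2>/dev/null || return 1
    # Refuse to kill a process whose command name is "sudo"
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # Reap it if it is our child; ignore "not a child" errors
    wait "$pid" 2>/dev/null
    return 0
}
```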
10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.985 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.985 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:19.985 [2024-11-19 10:45:09.657852] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:19.985 [2024-11-19 10:45:09.658748] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:17:19.985 [2024-11-19 10:45:09.658791] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.985 [2024-11-19 10:45:09.733199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.243 [2024-11-19 10:45:09.774419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.243 [2024-11-19 10:45:09.774456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.243 [2024-11-19 10:45:09.774464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.243 [2024-11-19 10:45:09.774474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.243 [2024-11-19 10:45:09.774479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:20.243 [2024-11-19 10:45:09.775998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.243 [2024-11-19 10:45:09.776082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.243 [2024-11-19 10:45:09.776188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.243 [2024-11-19 10:45:09.776188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.243 [2024-11-19 10:45:09.844395] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:20.243 [2024-11-19 10:45:09.844975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:20.243 [2024-11-19 10:45:09.845304] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:20.243 [2024-11-19 10:45:09.845596] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:20.243 [2024-11-19 10:45:09.845637] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
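At this point the script has launched `nvmf_tgt` in the background and is about to `waitforlisten` on its pid before issuing any `rpc.py` calls, i.e. it polls for the `/var/tmp/spdk.sock` RPC endpoint while also checking that the daemon did not die early. A minimal sketch of that start-and-wait pattern follows; it is an assumption-laden illustration (the generic path check and the helper name `start_and_wait` are not SPDK's actual `waitforlisten`).

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the startup handshake traced above: launch
# an app, then poll for its RPC endpoint path, bailing out if the
# process exits before the endpoint appears. Paths and the 10s
# budget are illustrative assumptions.
start_and_wait() {
    local app=$1
    local sock=${2:-/var/tmp/spdk.sock}
    "$app" &
    local pid=$!
    # Make sure the daemon is cleaned up when the shell exits
    trap "kill $pid 2>/dev/null" EXIT
    local i=0
    while [ "$i" -lt 100 ]; do
        [ -e "$sock" ] && return 0                 # endpoint is up
        kill -0 "$pid" 2>/dev/null || return 1     # died early
        sleep 0.1
        i=$((i + 1))
    done
    return 1   # timed out waiting for the endpoint
}
```

Only after this returns does the script proceed to `nvmf_create_transport -t VFIOUSER` and the per-subsystem RPCs seen below.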
00:17:20.243 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.243 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:20.243 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:21.178 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:21.436 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:21.436 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:21.436 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:21.437 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:21.437 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:21.695 Malloc1 00:17:21.695 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:21.953 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:21.953 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:17:22.211 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:22.211 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:22.211 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:22.468 Malloc2 00:17:22.468 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:22.726 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:22.726 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:22.984 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:22.984 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3900004 00:17:22.984 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3900004 ']' 00:17:22.984 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3900004 00:17:22.984 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:22.984 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.984 10:45:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3900004 00:17:22.984 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.984 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.984 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3900004' 00:17:22.984 killing process with pid 3900004 00:17:22.984 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3900004 00:17:22.984 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3900004 00:17:23.244 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:23.244 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:23.244 00:17:23.244 real 0m50.822s 00:17:23.244 user 3m16.716s 00:17:23.244 sys 0m3.149s 00:17:23.244 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.244 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:23.244 ************************************ 00:17:23.244 END TEST nvmf_vfio_user 00:17:23.244 ************************************ 00:17:23.244 10:45:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:23.244 10:45:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:23.244 10:45:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.244 10:45:12 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.244 ************************************ 00:17:23.244 START TEST nvmf_vfio_user_nvme_compliance 00:17:23.244 ************************************ 00:17:23.244 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:23.503 * Looking for test storage... 00:17:23.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:23.503 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:23.503 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:17:23.503 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:23.503 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:23.503 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:23.503 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:23.503 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:23.503 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:17:23.503 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:17:23.503 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:17:23.503 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:17:23.503 10:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:17:23.503 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:17:23.503 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:23.504 10:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:23.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.504 --rc genhtml_branch_coverage=1 00:17:23.504 --rc genhtml_function_coverage=1 00:17:23.504 --rc genhtml_legend=1 00:17:23.504 --rc geninfo_all_blocks=1 00:17:23.504 --rc geninfo_unexecuted_blocks=1 00:17:23.504 00:17:23.504 ' 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:23.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.504 --rc genhtml_branch_coverage=1 00:17:23.504 --rc genhtml_function_coverage=1 00:17:23.504 --rc genhtml_legend=1 00:17:23.504 --rc geninfo_all_blocks=1 00:17:23.504 --rc geninfo_unexecuted_blocks=1 00:17:23.504 00:17:23.504 ' 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:23.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.504 --rc genhtml_branch_coverage=1 00:17:23.504 --rc genhtml_function_coverage=1 00:17:23.504 --rc 
genhtml_legend=1 00:17:23.504 --rc geninfo_all_blocks=1 00:17:23.504 --rc geninfo_unexecuted_blocks=1 00:17:23.504 00:17:23.504 ' 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:23.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.504 --rc genhtml_branch_coverage=1 00:17:23.504 --rc genhtml_function_coverage=1 00:17:23.504 --rc genhtml_legend=1 00:17:23.504 --rc geninfo_all_blocks=1 00:17:23.504 --rc geninfo_unexecuted_blocks=1 00:17:23.504 00:17:23.504 ' 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.504 10:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:23.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:23.504 10:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3900759 00:17:23.504 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3900759' 00:17:23.504 Process pid: 3900759 00:17:23.505 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:23.505 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3900759 00:17:23.505 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:23.505 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3900759 ']' 00:17:23.505 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.505 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.505 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.505 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.505 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:23.505 [2024-11-19 10:45:13.268874] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:17:23.505 [2024-11-19 10:45:13.268926] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.764 [2024-11-19 10:45:13.345574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:23.764 [2024-11-19 10:45:13.385079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.764 [2024-11-19 10:45:13.385114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.764 [2024-11-19 10:45:13.385122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:23.764 [2024-11-19 10:45:13.385128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:23.764 [2024-11-19 10:45:13.385133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:23.764 [2024-11-19 10:45:13.386562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.764 [2024-11-19 10:45:13.386668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.764 [2024-11-19 10:45:13.386667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.764 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.764 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:17:23.764 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.142 10:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:25.142 malloc0 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:25.142 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:25.142 00:17:25.142 00:17:25.142 CUnit - A unit testing framework for C - Version 2.1-3 00:17:25.142 http://cunit.sourceforge.net/ 00:17:25.142 00:17:25.142 00:17:25.142 Suite: nvme_compliance 00:17:25.142 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-19 10:45:14.729688] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:25.142 [2024-11-19 10:45:14.731035] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:25.142 [2024-11-19 10:45:14.731049] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:25.142 [2024-11-19 10:45:14.731056] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:25.142 [2024-11-19 10:45:14.732703] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:25.142 passed 00:17:25.142 Test: admin_identify_ctrlr_verify_fused ...[2024-11-19 10:45:14.811285] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:25.142 [2024-11-19 10:45:14.814301] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:25.142 passed 00:17:25.142 Test: admin_identify_ns ...[2024-11-19 10:45:14.893012] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:25.401 [2024-11-19 10:45:14.952213] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:25.401 [2024-11-19 10:45:14.960215] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:25.401 [2024-11-19 10:45:14.981314] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:17:25.401 passed 00:17:25.401 Test: admin_get_features_mandatory_features ...[2024-11-19 10:45:15.057930] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:25.401 [2024-11-19 10:45:15.060951] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:25.401 passed 00:17:25.401 Test: admin_get_features_optional_features ...[2024-11-19 10:45:15.137461] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:25.401 [2024-11-19 10:45:15.140477] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:25.401 passed 00:17:25.661 Test: admin_set_features_number_of_queues ...[2024-11-19 10:45:15.218340] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:25.661 [2024-11-19 10:45:15.325303] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:25.661 passed 00:17:25.661 Test: admin_get_log_page_mandatory_logs ...[2024-11-19 10:45:15.400104] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:25.662 [2024-11-19 10:45:15.403120] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:25.662 passed 00:17:25.920 Test: admin_get_log_page_with_lpo ...[2024-11-19 10:45:15.478510] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:25.920 [2024-11-19 10:45:15.549215] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:25.920 [2024-11-19 10:45:15.562286] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:25.920 passed 00:17:25.920 Test: fabric_property_get ...[2024-11-19 10:45:15.634991] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:25.920 [2024-11-19 10:45:15.636228] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:25.920 [2024-11-19 10:45:15.638007] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:25.920 passed 00:17:26.178 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-19 10:45:15.715511] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:26.178 [2024-11-19 10:45:15.716741] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:26.178 [2024-11-19 10:45:15.718538] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:26.178 passed 00:17:26.178 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-19 10:45:15.794541] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:26.178 [2024-11-19 10:45:15.882218] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:26.178 [2024-11-19 10:45:15.898213] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:26.178 [2024-11-19 10:45:15.903288] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:26.178 passed 00:17:26.437 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-19 10:45:15.977125] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:26.437 [2024-11-19 10:45:15.978371] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:26.437 [2024-11-19 10:45:15.982162] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:26.437 passed 00:17:26.437 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-19 10:45:16.057846] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:26.437 [2024-11-19 10:45:16.133212] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:26.437 [2024-11-19 
10:45:16.157210] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:26.437 [2024-11-19 10:45:16.162280] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:26.437 passed 00:17:26.695 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-19 10:45:16.238025] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:26.695 [2024-11-19 10:45:16.239266] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:26.695 [2024-11-19 10:45:16.239291] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:26.695 [2024-11-19 10:45:16.244067] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:26.695 passed 00:17:26.695 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-19 10:45:16.318787] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:26.695 [2024-11-19 10:45:16.414222] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:26.695 [2024-11-19 10:45:16.422212] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:26.695 [2024-11-19 10:45:16.430209] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:26.695 [2024-11-19 10:45:16.438210] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:26.695 [2024-11-19 10:45:16.467287] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:26.955 passed 00:17:26.955 Test: admin_create_io_sq_verify_pc ...[2024-11-19 10:45:16.543097] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:26.955 [2024-11-19 10:45:16.564216] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:26.955 [2024-11-19 10:45:16.581316] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:26.955 passed 00:17:26.955 Test: admin_create_io_qp_max_qps ...[2024-11-19 10:45:16.659845] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:28.331 [2024-11-19 10:45:17.770212] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:17:28.589 [2024-11-19 10:45:18.156726] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:28.589 passed 00:17:28.589 Test: admin_create_io_sq_shared_cq ...[2024-11-19 10:45:18.230445] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:28.589 [2024-11-19 10:45:18.366209] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:28.849 [2024-11-19 10:45:18.403267] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:28.849 passed 00:17:28.849 00:17:28.849 Run Summary: Type Total Ran Passed Failed Inactive 00:17:28.849 suites 1 1 n/a 0 0 00:17:28.849 tests 18 18 18 0 0 00:17:28.849 asserts 360 360 360 0 n/a 00:17:28.849 00:17:28.849 Elapsed time = 1.509 seconds 00:17:28.849 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3900759 00:17:28.849 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3900759 ']' 00:17:28.849 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3900759 00:17:28.849 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:17:28.849 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.849 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3900759 00:17:28.849 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.849 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.849 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3900759' 00:17:28.849 killing process with pid 3900759 00:17:28.849 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3900759 00:17:28.849 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3900759 00:17:29.107 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:29.107 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:29.107 00:17:29.107 real 0m5.668s 00:17:29.107 user 0m15.795s 00:17:29.107 sys 0m0.526s 00:17:29.107 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.107 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:29.107 ************************************ 00:17:29.107 END TEST nvmf_vfio_user_nvme_compliance 00:17:29.107 ************************************ 00:17:29.107 10:45:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:29.107 10:45:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:29.107 10:45:18 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.107 10:45:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:29.107 ************************************ 00:17:29.107 START TEST nvmf_vfio_user_fuzz 00:17:29.107 ************************************ 00:17:29.107 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:29.107 * Looking for test storage... 00:17:29.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:29.107 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:29.107 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:17:29.107 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:29.366 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:29.366 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:29.366 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:29.366 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:29.366 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:29.366 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:29.366 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:29.366 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:29.366 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:29.367 10:45:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:29.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.367 --rc genhtml_branch_coverage=1 00:17:29.367 --rc genhtml_function_coverage=1 00:17:29.367 --rc genhtml_legend=1 00:17:29.367 --rc geninfo_all_blocks=1 00:17:29.367 --rc geninfo_unexecuted_blocks=1 00:17:29.367 00:17:29.367 ' 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:29.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.367 --rc genhtml_branch_coverage=1 00:17:29.367 --rc genhtml_function_coverage=1 00:17:29.367 --rc genhtml_legend=1 00:17:29.367 --rc geninfo_all_blocks=1 00:17:29.367 --rc geninfo_unexecuted_blocks=1 00:17:29.367 00:17:29.367 ' 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:29.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.367 --rc genhtml_branch_coverage=1 00:17:29.367 --rc genhtml_function_coverage=1 00:17:29.367 --rc genhtml_legend=1 00:17:29.367 --rc geninfo_all_blocks=1 00:17:29.367 --rc geninfo_unexecuted_blocks=1 00:17:29.367 00:17:29.367 ' 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:29.367 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:29.367 --rc genhtml_branch_coverage=1 00:17:29.367 --rc genhtml_function_coverage=1 00:17:29.367 --rc genhtml_legend=1 00:17:29.367 --rc geninfo_all_blocks=1 00:17:29.367 --rc geninfo_unexecuted_blocks=1 00:17:29.367 00:17:29.367 ' 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.367 10:45:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:29.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3901812 00:17:29.367 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3901812' 00:17:29.367 Process pid: 3901812 00:17:29.368 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:29.368 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:29.368 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3901812 00:17:29.368 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3901812 ']' 00:17:29.368 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.368 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.368 10:45:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.368 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.368 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:29.625 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.625 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:17:29.625 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:30.560 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:30.561 malloc0 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:30.561 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:02.636 Fuzzing completed. Shutting down the fuzz application 00:18:02.636 00:18:02.636 Dumping successful admin opcodes: 00:18:02.636 8, 9, 10, 24, 00:18:02.636 Dumping successful io opcodes: 00:18:02.636 0, 00:18:02.636 NS: 0x20000081ef00 I/O qp, Total commands completed: 1146091, total successful commands: 4517, random_seed: 2631786240 00:18:02.636 NS: 0x20000081ef00 admin qp, Total commands completed: 283563, total successful commands: 2285, random_seed: 1667195904 00:18:02.636 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:02.636 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3901812 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3901812 ']' 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3901812 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3901812 00:18:02.637 10:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3901812' 00:18:02.637 killing process with pid 3901812 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3901812 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3901812 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:02.637 00:18:02.637 real 0m32.208s 00:18:02.637 user 0m34.434s 00:18:02.637 sys 0m26.898s 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:02.637 ************************************ 00:18:02.637 END TEST nvmf_vfio_user_fuzz 00:18:02.637 ************************************ 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:18:02.637 10:45:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:02.637 ************************************ 00:18:02.637 START TEST nvmf_auth_target 00:18:02.637 ************************************ 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:02.637 * Looking for test storage... 00:18:02.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.637 10:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.637 10:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:02.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.637 --rc genhtml_branch_coverage=1 00:18:02.637 --rc genhtml_function_coverage=1 00:18:02.637 --rc genhtml_legend=1 00:18:02.637 --rc geninfo_all_blocks=1 00:18:02.637 --rc geninfo_unexecuted_blocks=1 00:18:02.637 00:18:02.637 ' 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:02.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.637 --rc genhtml_branch_coverage=1 00:18:02.637 --rc genhtml_function_coverage=1 00:18:02.637 --rc genhtml_legend=1 00:18:02.637 --rc geninfo_all_blocks=1 00:18:02.637 --rc geninfo_unexecuted_blocks=1 00:18:02.637 00:18:02.637 ' 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:02.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.637 --rc genhtml_branch_coverage=1 00:18:02.637 --rc genhtml_function_coverage=1 00:18:02.637 --rc genhtml_legend=1 00:18:02.637 --rc geninfo_all_blocks=1 00:18:02.637 --rc geninfo_unexecuted_blocks=1 00:18:02.637 00:18:02.637 ' 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:02.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.637 --rc genhtml_branch_coverage=1 00:18:02.637 --rc genhtml_function_coverage=1 00:18:02.637 --rc genhtml_legend=1 00:18:02.637 
--rc geninfo_all_blocks=1 00:18:02.637 --rc geninfo_unexecuted_blocks=1 00:18:02.637 00:18:02.637 ' 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.637 
10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.637 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:02.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:02.638 10:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:02.638 10:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:02.638 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:07.911 10:45:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:07.911 10:45:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:07.911 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:07.911 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:07.912 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.912 
10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:07.912 Found net devices under 0000:86:00.0: cvl_0_0 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:07.912 
10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:07.912 Found net devices under 0000:86:00.1: cvl_0_1 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:07.912 10:45:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:07.912 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:07.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:18:07.912 00:18:07.912 --- 10.0.0.2 ping statistics --- 00:18:07.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.912 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:07.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:07.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:18:07.912 00:18:07.912 --- 10.0.0.1 ping statistics --- 00:18:07.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.912 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
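The trace above builds the two-endpoint TCP test topology: the target-side interface (cvl_0_0) is moved into a dedicated network namespace, both ends get addresses on 10.0.0.0/24, an iptables ACCEPT rule opens the NVMe/TCP port 4420 toward the initiator interface (ipts is the SPDK wrapper that tags the rule with an SPDK_NVMF comment), and connectivity is verified with one ping in each direction. Condensed into a standalone sketch — interface names are the cvl_* ones from this log, and the commands need root plus real NICs, so this is for orientation rather than a runnable test:

```shell
# Sketch of the netns setup performed by nvmf/common.sh (root required).
# cvl_0_0 = target-side NIC, cvl_0_1 = initiator-side NIC (names from this log).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port (4420) toward the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity checks in both directions, matching the pings in the trace.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

After this, NVMF_TARGET_NS_CMD is prepended to NVMF_APP so that nvmf_tgt itself launches inside the namespace, which is why the later start line runs `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`.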
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3910121 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3910121 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3910121 ']' 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3910141 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=670d1df27dc53100ebbd201df34f7e6d09c3f55973275750 00:18:07.912 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.CGK 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 670d1df27dc53100ebbd201df34f7e6d09c3f55973275750 0 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 670d1df27dc53100ebbd201df34f7e6d09c3f55973275750 0 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=670d1df27dc53100ebbd201df34f7e6d09c3f55973275750 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.CGK 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.CGK 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.CGK 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
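gen_dhchap_key, as traced above, reads len/2 random bytes with `xxd -p -c0 -l <n> /dev/urandom` and then format_dhchap_key/format_key wrap the hex string into a DHHC-1 secret via an inline `python -` step whose body the xtrace does not show. The sketch below reconstructs that step on the assumption — matching the DHHC-1 convention also used by nvme-cli — that the secret is `DHHC-1:<digest>:base64(key || crc32(key), CRC little-endian):`. A fixed key (the one from this run) replaces /dev/urandom so the output is reproducible:

```shell
# Sketch of gen_dhchap_key null 48: 24 random bytes -> 48 hex chars -> DHHC-1 secret.
# Fixed key instead of /dev/urandom so the result is deterministic.
key=670d1df27dc53100ebbd201df34f7e6d09c3f55973275750
digest=0   # digests[] map from common.sh: 0=null, 1=sha256, 2=sha384, 3=sha512

secret=$(python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC32 of the key, appended little-endian
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
EOF
)
echo "$secret"
```

The script then writes the secret to a `mktemp -t spdk.key-<digest>.XXX` file and chmods it 0600, which is the path later handed to keyring_file_add_key.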
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=982a3f5a01499d7442e6443beb5dded4f9f9ed6138418766fd33bf76eebe19d0 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.pYE 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 982a3f5a01499d7442e6443beb5dded4f9f9ed6138418766fd33bf76eebe19d0 3 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 982a3f5a01499d7442e6443beb5dded4f9f9ed6138418766fd33bf76eebe19d0 3 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=982a3f5a01499d7442e6443beb5dded4f9f9ed6138418766fd33bf76eebe19d0 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.pYE 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.pYE 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.pYE 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0df96c8c8ab552d55c718ab839e31c52 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.dAB 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0df96c8c8ab552d55c718ab839e31c52 1 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
0df96c8c8ab552d55c718ab839e31c52 1 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0df96c8c8ab552d55c718ab839e31c52 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.dAB 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.dAB 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.dAB 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:07.913 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6c8f80008ca64ac56e38708ec135345067898fb9f3b58a39 00:18:07.913 10:45:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Kmd 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6c8f80008ca64ac56e38708ec135345067898fb9f3b58a39 2 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6c8f80008ca64ac56e38708ec135345067898fb9f3b58a39 2 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6c8f80008ca64ac56e38708ec135345067898fb9f3b58a39 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Kmd 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Kmd 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Kmd 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=632db4f8ed6722a1fe3b8eb824a0a981b95b031f571a66f6 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.v8K 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 632db4f8ed6722a1fe3b8eb824a0a981b95b031f571a66f6 2 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 632db4f8ed6722a1fe3b8eb824a0a981b95b031f571a66f6 2 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=632db4f8ed6722a1fe3b8eb824a0a981b95b031f571a66f6 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.v8K 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.v8K 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.v8K 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=02e508bad19245d5e8d262e3a15cc6b2 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Z1q 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 02e508bad19245d5e8d262e3a15cc6b2 1 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 02e508bad19245d5e8d262e3a15cc6b2 1 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=02e508bad19245d5e8d262e3a15cc6b2 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Z1q 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Z1q 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Z1q 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=67520ab0797854689da8a75663a7e074bbac281699bc1a19a37115a23e9d6194 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.575 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 67520ab0797854689da8a75663a7e074bbac281699bc1a19a37115a23e9d6194 3 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 67520ab0797854689da8a75663a7e074bbac281699bc1a19a37115a23e9d6194 3 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=67520ab0797854689da8a75663a7e074bbac281699bc1a19a37115a23e9d6194 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.575 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.575 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.575 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3910121 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3910121 ']' 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.172 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.430 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.430 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:08.430 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3910141 /var/tmp/host.sock 00:18:08.430 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3910141 ']' 00:18:08.430 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:08.430 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.430 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:08.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:18:08.431 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.431 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.689 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.689 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:08.689 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:08.689 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.689 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.689 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.689 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:08.689 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CGK 00:18:08.689 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.689 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.689 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.689 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.CGK 00:18:08.689 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.CGK 00:18:08.947 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.pYE ]] 00:18:08.947 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pYE 00:18:08.948 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.948 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.948 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.948 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pYE 00:18:08.948 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pYE 00:18:09.205 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:09.205 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dAB 00:18:09.205 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.205 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.205 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.205 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dAB 00:18:09.205 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.dAB 00:18:09.205 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.Kmd ]] 00:18:09.205 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Kmd 00:18:09.205 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.205 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.205 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.205 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Kmd 00:18:09.205 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Kmd 00:18:09.462 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:09.462 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.v8K 00:18:09.462 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.462 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.462 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.462 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.v8K 00:18:09.462 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.v8K 00:18:09.720 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Z1q ]] 00:18:09.720 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Z1q 00:18:09.720 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.720 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.720 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.720 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Z1q 00:18:09.720 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Z1q 00:18:09.978 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:09.978 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.575 00:18:09.978 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.978 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.978 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.978 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.575 00:18:09.978 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.575 00:18:09.978 10:45:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:09.978 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:09.978 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.978 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.978 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:09.978 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:10.236 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:10.236 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.236 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:10.236 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:10.236 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.236 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.236 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.236 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.236 10:45:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.236 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.236 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.236 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.236 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.494 00:18:10.494 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.494 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.494 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.752 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.752 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.752 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.752 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:10.752 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.752 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.752 { 00:18:10.752 "cntlid": 1, 00:18:10.752 "qid": 0, 00:18:10.752 "state": "enabled", 00:18:10.752 "thread": "nvmf_tgt_poll_group_000", 00:18:10.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:10.752 "listen_address": { 00:18:10.752 "trtype": "TCP", 00:18:10.752 "adrfam": "IPv4", 00:18:10.752 "traddr": "10.0.0.2", 00:18:10.752 "trsvcid": "4420" 00:18:10.752 }, 00:18:10.752 "peer_address": { 00:18:10.752 "trtype": "TCP", 00:18:10.752 "adrfam": "IPv4", 00:18:10.752 "traddr": "10.0.0.1", 00:18:10.752 "trsvcid": "38946" 00:18:10.752 }, 00:18:10.752 "auth": { 00:18:10.752 "state": "completed", 00:18:10.752 "digest": "sha256", 00:18:10.752 "dhgroup": "null" 00:18:10.752 } 00:18:10.752 } 00:18:10.752 ]' 00:18:10.752 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.752 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.752 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.752 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:10.752 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.752 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.752 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.752 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.010 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:18:11.010 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:18:11.576 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.576 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:11.576 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.576 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.576 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.576 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.576 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:18:11.576 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:11.833 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:11.833 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.833 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:11.833 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:11.834 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:11.834 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.834 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.834 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.834 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.834 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.834 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.834 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.834 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.090 00:18:12.090 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.090 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.090 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.348 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.348 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.348 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.348 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.348 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.348 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.348 { 00:18:12.348 "cntlid": 3, 00:18:12.348 "qid": 0, 00:18:12.348 "state": "enabled", 00:18:12.348 "thread": "nvmf_tgt_poll_group_000", 00:18:12.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:12.348 "listen_address": { 00:18:12.348 "trtype": "TCP", 00:18:12.348 "adrfam": "IPv4", 00:18:12.348 
"traddr": "10.0.0.2", 00:18:12.348 "trsvcid": "4420" 00:18:12.348 }, 00:18:12.348 "peer_address": { 00:18:12.348 "trtype": "TCP", 00:18:12.348 "adrfam": "IPv4", 00:18:12.348 "traddr": "10.0.0.1", 00:18:12.348 "trsvcid": "38964" 00:18:12.348 }, 00:18:12.348 "auth": { 00:18:12.348 "state": "completed", 00:18:12.348 "digest": "sha256", 00:18:12.348 "dhgroup": "null" 00:18:12.348 } 00:18:12.348 } 00:18:12.348 ]' 00:18:12.348 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.348 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.348 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.348 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:12.348 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.348 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.348 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.348 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.606 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:18:12.606 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:18:13.170 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.170 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:13.170 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.170 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.170 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.170 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.170 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:13.170 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:13.430 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:13.430 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.430 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:13.430 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:18:13.430 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:13.430 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.430 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.430 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.430 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.430 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.430 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.430 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.430 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.714 00:18:13.714 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.714 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.714 
10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.977 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.977 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.977 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.977 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.977 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.977 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.977 { 00:18:13.977 "cntlid": 5, 00:18:13.977 "qid": 0, 00:18:13.977 "state": "enabled", 00:18:13.977 "thread": "nvmf_tgt_poll_group_000", 00:18:13.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:13.977 "listen_address": { 00:18:13.977 "trtype": "TCP", 00:18:13.977 "adrfam": "IPv4", 00:18:13.977 "traddr": "10.0.0.2", 00:18:13.977 "trsvcid": "4420" 00:18:13.977 }, 00:18:13.977 "peer_address": { 00:18:13.977 "trtype": "TCP", 00:18:13.977 "adrfam": "IPv4", 00:18:13.977 "traddr": "10.0.0.1", 00:18:13.977 "trsvcid": "39000" 00:18:13.977 }, 00:18:13.977 "auth": { 00:18:13.977 "state": "completed", 00:18:13.977 "digest": "sha256", 00:18:13.977 "dhgroup": "null" 00:18:13.977 } 00:18:13.977 } 00:18:13.977 ]' 00:18:13.977 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.977 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.977 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:18:13.977 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:13.977 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.977 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.977 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.977 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.249 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:18:14.249 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:18:14.813 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.813 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:14.813 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.813 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.813 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.813 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.813 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:14.813 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:15.071 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:15.071 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.071 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:15.071 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:15.071 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.071 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.071 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:15.071 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.071 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:15.071 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.071 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.071 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.071 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.329 00:18:15.329 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.329 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.329 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.329 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.588 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.588 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.588 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.588 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.588 
10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.588 { 00:18:15.588 "cntlid": 7, 00:18:15.588 "qid": 0, 00:18:15.588 "state": "enabled", 00:18:15.588 "thread": "nvmf_tgt_poll_group_000", 00:18:15.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:15.588 "listen_address": { 00:18:15.588 "trtype": "TCP", 00:18:15.588 "adrfam": "IPv4", 00:18:15.588 "traddr": "10.0.0.2", 00:18:15.588 "trsvcid": "4420" 00:18:15.588 }, 00:18:15.588 "peer_address": { 00:18:15.588 "trtype": "TCP", 00:18:15.588 "adrfam": "IPv4", 00:18:15.588 "traddr": "10.0.0.1", 00:18:15.588 "trsvcid": "39044" 00:18:15.588 }, 00:18:15.588 "auth": { 00:18:15.588 "state": "completed", 00:18:15.588 "digest": "sha256", 00:18:15.588 "dhgroup": "null" 00:18:15.588 } 00:18:15.588 } 00:18:15.588 ]' 00:18:15.588 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.588 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.588 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.588 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:15.588 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.588 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.588 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.588 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.846 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:18:15.846 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:18:16.411 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.411 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:16.411 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.411 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.411 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.411 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.411 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.411 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:16.411 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:16.668 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:16.668 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.668 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:16.668 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:16.668 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:16.668 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.668 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.668 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.668 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.668 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.668 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.668 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.668 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.925 00:18:16.925 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.925 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.925 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.925 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.925 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.925 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.925 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.925 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.925 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.925 { 00:18:16.926 "cntlid": 9, 00:18:16.926 "qid": 0, 00:18:16.926 "state": "enabled", 00:18:16.926 "thread": "nvmf_tgt_poll_group_000", 00:18:16.926 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:16.926 "listen_address": { 00:18:16.926 "trtype": "TCP", 00:18:16.926 "adrfam": "IPv4", 00:18:16.926 "traddr": "10.0.0.2", 00:18:16.926 "trsvcid": "4420" 00:18:16.926 }, 00:18:16.926 "peer_address": { 00:18:16.926 "trtype": "TCP", 00:18:16.926 "adrfam": "IPv4", 00:18:16.926 "traddr": "10.0.0.1", 00:18:16.926 "trsvcid": "39068" 00:18:16.926 
}, 00:18:16.926 "auth": { 00:18:16.926 "state": "completed", 00:18:16.926 "digest": "sha256", 00:18:16.926 "dhgroup": "ffdhe2048" 00:18:16.926 } 00:18:16.926 } 00:18:16.926 ]' 00:18:16.926 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.183 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.183 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.183 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:17.183 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.183 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.183 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.183 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.440 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:18:17.440 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret 
DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.005 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.263 00:18:18.263 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.263 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.263 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.521 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.521 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.521 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.521 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.521 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.521 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.521 { 00:18:18.521 "cntlid": 11, 00:18:18.521 "qid": 0, 00:18:18.521 "state": "enabled", 00:18:18.521 "thread": "nvmf_tgt_poll_group_000", 00:18:18.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:18.521 "listen_address": { 00:18:18.521 "trtype": "TCP", 00:18:18.521 "adrfam": "IPv4", 00:18:18.521 "traddr": "10.0.0.2", 00:18:18.521 "trsvcid": "4420" 00:18:18.521 }, 00:18:18.521 "peer_address": { 00:18:18.521 "trtype": "TCP", 00:18:18.521 "adrfam": "IPv4", 00:18:18.521 "traddr": "10.0.0.1", 00:18:18.521 "trsvcid": "39086" 00:18:18.521 }, 00:18:18.521 "auth": { 00:18:18.521 "state": "completed", 00:18:18.521 "digest": "sha256", 00:18:18.521 "dhgroup": "ffdhe2048" 00:18:18.521 } 00:18:18.521 } 00:18:18.521 ]' 00:18:18.521 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.778 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.778 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.778 10:46:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:18.778 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.778 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.778 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.778 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.036 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:18:19.036 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.602 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.860 00:18:19.860 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.860 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.860 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.117 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.117 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.117 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.117 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.117 10:46:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.117 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.117 { 00:18:20.117 "cntlid": 13, 00:18:20.117 "qid": 0, 00:18:20.117 "state": "enabled", 00:18:20.117 "thread": "nvmf_tgt_poll_group_000", 00:18:20.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:20.117 "listen_address": { 00:18:20.117 "trtype": "TCP", 00:18:20.117 "adrfam": "IPv4", 00:18:20.117 "traddr": "10.0.0.2", 00:18:20.117 "trsvcid": "4420" 00:18:20.117 }, 00:18:20.117 "peer_address": { 00:18:20.117 "trtype": "TCP", 00:18:20.117 "adrfam": "IPv4", 00:18:20.117 "traddr": "10.0.0.1", 00:18:20.117 "trsvcid": "39106" 00:18:20.117 }, 00:18:20.117 "auth": { 00:18:20.117 "state": "completed", 00:18:20.117 "digest": "sha256", 00:18:20.117 "dhgroup": "ffdhe2048" 00:18:20.117 } 00:18:20.117 } 00:18:20.117 ]' 00:18:20.117 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.117 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.118 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.375 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:20.375 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.375 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.375 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.375 10:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.375 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:18:20.375 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:18:20.940 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.940 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:20.940 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.940 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.197 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.197 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.197 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:21.197 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:21.198 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:21.198 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.198 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:21.198 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:21.198 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:21.198 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.198 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:21.198 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.198 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.198 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.198 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:21.198 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.198 10:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.456 00:18:21.456 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.456 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.456 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.715 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.715 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.715 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.715 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.715 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.715 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.715 { 00:18:21.715 "cntlid": 15, 00:18:21.715 "qid": 0, 00:18:21.715 "state": "enabled", 00:18:21.715 "thread": "nvmf_tgt_poll_group_000", 00:18:21.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:21.715 "listen_address": { 00:18:21.715 "trtype": "TCP", 00:18:21.715 "adrfam": "IPv4", 00:18:21.715 "traddr": "10.0.0.2", 00:18:21.715 "trsvcid": "4420" 00:18:21.715 }, 00:18:21.715 "peer_address": { 00:18:21.715 "trtype": "TCP", 00:18:21.715 "adrfam": "IPv4", 00:18:21.715 "traddr": "10.0.0.1", 
00:18:21.715 "trsvcid": "42466" 00:18:21.715 }, 00:18:21.715 "auth": { 00:18:21.715 "state": "completed", 00:18:21.715 "digest": "sha256", 00:18:21.715 "dhgroup": "ffdhe2048" 00:18:21.715 } 00:18:21.715 } 00:18:21.715 ]' 00:18:21.715 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.715 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.715 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.973 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:21.973 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.973 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.973 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.973 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.973 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:18:21.973 10:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:18:22.538 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.538 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:22.538 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.538 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.538 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.538 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.538 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.538 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:22.538 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:22.796 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:22.796 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.796 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:22.796 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:22.796 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:22.796 10:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.796 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.796 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.796 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.796 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.796 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.796 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.796 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.053 00:18:23.053 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.053 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.053 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.311 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.311 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.311 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.311 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.311 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.311 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.311 { 00:18:23.311 "cntlid": 17, 00:18:23.311 "qid": 0, 00:18:23.311 "state": "enabled", 00:18:23.311 "thread": "nvmf_tgt_poll_group_000", 00:18:23.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:23.311 "listen_address": { 00:18:23.311 "trtype": "TCP", 00:18:23.311 "adrfam": "IPv4", 00:18:23.311 "traddr": "10.0.0.2", 00:18:23.311 "trsvcid": "4420" 00:18:23.311 }, 00:18:23.311 "peer_address": { 00:18:23.311 "trtype": "TCP", 00:18:23.311 "adrfam": "IPv4", 00:18:23.311 "traddr": "10.0.0.1", 00:18:23.311 "trsvcid": "42482" 00:18:23.311 }, 00:18:23.311 "auth": { 00:18:23.311 "state": "completed", 00:18:23.311 "digest": "sha256", 00:18:23.311 "dhgroup": "ffdhe3072" 00:18:23.311 } 00:18:23.311 } 00:18:23.311 ]' 00:18:23.311 10:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.311 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.311 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.311 10:46:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:23.311 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.569 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.569 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.569 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.569 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:18:23.569 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:18:24.135 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.135 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:24.135 10:46:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.135 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.135 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.135 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.135 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:24.135 10:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:24.393 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:24.393 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.393 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:24.393 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:24.393 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:24.393 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.393 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.393 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.393 10:46:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.393 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.393 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.393 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.393 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.651 00:18:24.651 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.651 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.651 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.909 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.909 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.909 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.909 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:24.909 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.909 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.909 { 00:18:24.909 "cntlid": 19, 00:18:24.909 "qid": 0, 00:18:24.909 "state": "enabled", 00:18:24.909 "thread": "nvmf_tgt_poll_group_000", 00:18:24.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:24.909 "listen_address": { 00:18:24.909 "trtype": "TCP", 00:18:24.909 "adrfam": "IPv4", 00:18:24.909 "traddr": "10.0.0.2", 00:18:24.909 "trsvcid": "4420" 00:18:24.909 }, 00:18:24.909 "peer_address": { 00:18:24.909 "trtype": "TCP", 00:18:24.909 "adrfam": "IPv4", 00:18:24.909 "traddr": "10.0.0.1", 00:18:24.909 "trsvcid": "42520" 00:18:24.909 }, 00:18:24.909 "auth": { 00:18:24.909 "state": "completed", 00:18:24.909 "digest": "sha256", 00:18:24.909 "dhgroup": "ffdhe3072" 00:18:24.909 } 00:18:24.909 } 00:18:24.909 ]' 00:18:24.909 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.909 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.909 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.909 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:24.909 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.909 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.909 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.909 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.176 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:18:25.176 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:18:25.741 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.741 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:25.741 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.741 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.741 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.741 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.741 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:25.741 10:46:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:25.999 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:25.999 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.999 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:25.999 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:25.999 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:25.999 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.999 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.999 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.999 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.999 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.999 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.999 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.999 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.258 00:18:26.258 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.258 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.258 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.516 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.516 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.516 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.516 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.516 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.516 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.516 { 00:18:26.516 "cntlid": 21, 00:18:26.516 "qid": 0, 00:18:26.516 "state": "enabled", 00:18:26.516 "thread": "nvmf_tgt_poll_group_000", 00:18:26.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:26.516 "listen_address": { 00:18:26.516 "trtype": "TCP", 00:18:26.516 "adrfam": "IPv4", 00:18:26.516 "traddr": "10.0.0.2", 00:18:26.516 
"trsvcid": "4420" 00:18:26.516 }, 00:18:26.516 "peer_address": { 00:18:26.516 "trtype": "TCP", 00:18:26.516 "adrfam": "IPv4", 00:18:26.516 "traddr": "10.0.0.1", 00:18:26.516 "trsvcid": "42542" 00:18:26.516 }, 00:18:26.516 "auth": { 00:18:26.516 "state": "completed", 00:18:26.516 "digest": "sha256", 00:18:26.516 "dhgroup": "ffdhe3072" 00:18:26.516 } 00:18:26.516 } 00:18:26.516 ]' 00:18:26.516 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.516 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:26.516 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.516 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:26.516 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.516 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.516 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.516 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.775 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:18:26.775 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:18:27.341 10:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.341 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:27.341 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.341 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.341 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.341 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.341 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:27.341 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:27.599 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:27.599 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.599 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:27.599 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:27.599 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:27.599 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.599 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:27.599 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.599 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.599 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.599 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.599 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.599 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.857 00:18:27.857 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.857 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.857 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.115 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.115 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.115 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.115 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.115 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.115 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.115 { 00:18:28.115 "cntlid": 23, 00:18:28.115 "qid": 0, 00:18:28.115 "state": "enabled", 00:18:28.115 "thread": "nvmf_tgt_poll_group_000", 00:18:28.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:28.115 "listen_address": { 00:18:28.115 "trtype": "TCP", 00:18:28.115 "adrfam": "IPv4", 00:18:28.115 "traddr": "10.0.0.2", 00:18:28.115 "trsvcid": "4420" 00:18:28.115 }, 00:18:28.115 "peer_address": { 00:18:28.115 "trtype": "TCP", 00:18:28.115 "adrfam": "IPv4", 00:18:28.115 "traddr": "10.0.0.1", 00:18:28.115 "trsvcid": "42568" 00:18:28.115 }, 00:18:28.115 "auth": { 00:18:28.115 "state": "completed", 00:18:28.115 "digest": "sha256", 00:18:28.115 "dhgroup": "ffdhe3072" 00:18:28.115 } 00:18:28.115 } 00:18:28.115 ]' 00:18:28.115 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.115 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.115 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.115 10:46:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:28.115 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.115 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.115 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.115 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.373 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:18:28.373 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:18:28.939 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.939 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:28.939 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.939 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
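After each `bdev_nvme_attach_controller`, the trace above runs the same verification: `jq` pulls `.[0].auth.digest`, `.[0].auth.dhgroup`, and `.[0].auth.state` out of the `nvmf_subsystem_get_qpairs` output and string-compares them against the expected values. A minimal Python sketch of that check, using the qpair JSON shape shown in the log (field names are taken from the trace; the sample values are illustrative, not from a live target):

```python
import json

def check_qpair_auth(qpairs_json: str, digest: str, dhgroup: str) -> bool:
    """Mirror the jq checks in target/auth.sh: the first qpair's auth
    block must report the expected digest and dhgroup, and the DH-CHAP
    exchange must have reached the 'completed' state."""
    qpairs = json.loads(qpairs_json)
    auth = qpairs[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

# Sample shaped like the nvmf_subsystem_get_qpairs result in the log.
sample = json.dumps([{
    "cntlid": 23,
    "qid": 0,
    "state": "enabled",
    "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe3072"},
}])

print(check_qpair_auth(sample, "sha256", "ffdhe3072"))  # True
print(check_qpair_auth(sample, "sha256", "ffdhe4096"))  # False
```

The shell test expresses the same thing as three separate `[[ x == \x ]]` pattern matches, which is why each passing comparison appears in the xtrace with its right-hand side backslash-escaped.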
00:18:28.939 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.939 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.939 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.939 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:28.939 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:29.198 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:29.198 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.198 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:29.198 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:29.198 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:29.198 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.198 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.198 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.198 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:29.198 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.198 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.198 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.198 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.456 00:18:29.456 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.456 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.456 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.715 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.715 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.715 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.715 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.715 10:46:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.715 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.715 { 00:18:29.715 "cntlid": 25, 00:18:29.715 "qid": 0, 00:18:29.715 "state": "enabled", 00:18:29.715 "thread": "nvmf_tgt_poll_group_000", 00:18:29.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:29.715 "listen_address": { 00:18:29.715 "trtype": "TCP", 00:18:29.715 "adrfam": "IPv4", 00:18:29.715 "traddr": "10.0.0.2", 00:18:29.715 "trsvcid": "4420" 00:18:29.715 }, 00:18:29.715 "peer_address": { 00:18:29.715 "trtype": "TCP", 00:18:29.715 "adrfam": "IPv4", 00:18:29.715 "traddr": "10.0.0.1", 00:18:29.715 "trsvcid": "42582" 00:18:29.715 }, 00:18:29.715 "auth": { 00:18:29.715 "state": "completed", 00:18:29.715 "digest": "sha256", 00:18:29.715 "dhgroup": "ffdhe4096" 00:18:29.715 } 00:18:29.715 } 00:18:29.715 ]' 00:18:29.715 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.715 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:29.715 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.715 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:29.715 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.715 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.715 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.715 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.973 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:18:29.973 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:18:30.539 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.539 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:30.539 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.539 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.539 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.539 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.539 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:30.539 10:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:30.797 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:30.797 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.797 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:30.797 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:30.797 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:30.797 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.797 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.797 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.797 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.797 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.797 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.797 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.797 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.054 00:18:31.054 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.054 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.054 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.312 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.312 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.312 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.312 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.312 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.312 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.312 { 00:18:31.312 "cntlid": 27, 00:18:31.312 "qid": 0, 00:18:31.312 "state": "enabled", 00:18:31.312 "thread": "nvmf_tgt_poll_group_000", 00:18:31.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:31.312 "listen_address": { 00:18:31.312 "trtype": "TCP", 00:18:31.312 "adrfam": "IPv4", 00:18:31.312 "traddr": "10.0.0.2", 00:18:31.312 
"trsvcid": "4420" 00:18:31.312 }, 00:18:31.312 "peer_address": { 00:18:31.312 "trtype": "TCP", 00:18:31.312 "adrfam": "IPv4", 00:18:31.312 "traddr": "10.0.0.1", 00:18:31.312 "trsvcid": "33702" 00:18:31.312 }, 00:18:31.312 "auth": { 00:18:31.312 "state": "completed", 00:18:31.312 "digest": "sha256", 00:18:31.312 "dhgroup": "ffdhe4096" 00:18:31.312 } 00:18:31.312 } 00:18:31.312 ]' 00:18:31.312 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.312 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.312 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.312 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:31.312 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.312 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.313 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.313 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.570 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:18:31.570 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:18:32.137 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.137 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:32.137 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.137 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.137 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.137 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.137 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:32.137 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:32.395 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:32.395 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.395 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:32.395 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:32.395 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:32.395 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.395 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.395 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.395 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.395 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.395 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.395 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.395 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.652 00:18:32.652 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.652 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.652 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.910 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.910 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.910 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.910 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.910 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.910 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.910 { 00:18:32.910 "cntlid": 29, 00:18:32.910 "qid": 0, 00:18:32.910 "state": "enabled", 00:18:32.910 "thread": "nvmf_tgt_poll_group_000", 00:18:32.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:32.910 "listen_address": { 00:18:32.910 "trtype": "TCP", 00:18:32.910 "adrfam": "IPv4", 00:18:32.910 "traddr": "10.0.0.2", 00:18:32.910 "trsvcid": "4420" 00:18:32.910 }, 00:18:32.910 "peer_address": { 00:18:32.910 "trtype": "TCP", 00:18:32.910 "adrfam": "IPv4", 00:18:32.910 "traddr": "10.0.0.1", 00:18:32.910 "trsvcid": "33714" 00:18:32.910 }, 00:18:32.910 "auth": { 00:18:32.910 "state": "completed", 00:18:32.910 "digest": "sha256", 00:18:32.910 "dhgroup": "ffdhe4096" 00:18:32.910 } 00:18:32.910 } 00:18:32.910 ]' 00:18:32.910 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.910 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.911 10:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.911 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:32.911 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.911 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.911 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.911 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.168 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:18:33.168 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:18:33.734 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.734 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:33.734 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.734 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.734 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.734 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.734 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:33.734 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:33.992 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:18:33.992 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.992 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:33.992 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:33.992 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:33.992 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.992 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:33.992 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.992 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.992 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.992 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:33.992 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.992 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.249 00:18:34.249 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.249 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.249 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.506 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.506 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.506 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.506 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:34.506 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.506 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.506 { 00:18:34.506 "cntlid": 31, 00:18:34.506 "qid": 0, 00:18:34.506 "state": "enabled", 00:18:34.506 "thread": "nvmf_tgt_poll_group_000", 00:18:34.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:34.506 "listen_address": { 00:18:34.506 "trtype": "TCP", 00:18:34.506 "adrfam": "IPv4", 00:18:34.506 "traddr": "10.0.0.2", 00:18:34.506 "trsvcid": "4420" 00:18:34.506 }, 00:18:34.506 "peer_address": { 00:18:34.506 "trtype": "TCP", 00:18:34.506 "adrfam": "IPv4", 00:18:34.506 "traddr": "10.0.0.1", 00:18:34.506 "trsvcid": "33748" 00:18:34.506 }, 00:18:34.506 "auth": { 00:18:34.506 "state": "completed", 00:18:34.506 "digest": "sha256", 00:18:34.506 "dhgroup": "ffdhe4096" 00:18:34.507 } 00:18:34.507 } 00:18:34.507 ]' 00:18:34.507 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.507 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.507 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.507 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:34.507 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.507 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.507 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.507 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.878 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:18:34.878 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:18:35.150 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.150 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:35.150 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.150 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.408 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.408 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.408 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.408 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:35.408 10:46:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:35.408 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:35.408 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.408 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:35.408 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:35.408 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:35.408 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.408 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.408 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.408 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.408 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.408 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.408 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.408 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.973 00:18:35.973 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.973 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.973 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.973 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.973 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.973 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.973 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.973 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.973 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.973 { 00:18:35.973 "cntlid": 33, 00:18:35.973 "qid": 0, 00:18:35.973 "state": "enabled", 00:18:35.973 "thread": "nvmf_tgt_poll_group_000", 00:18:35.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:35.973 "listen_address": { 00:18:35.973 "trtype": "TCP", 00:18:35.974 "adrfam": "IPv4", 00:18:35.974 "traddr": "10.0.0.2", 00:18:35.974 
"trsvcid": "4420" 00:18:35.974 }, 00:18:35.974 "peer_address": { 00:18:35.974 "trtype": "TCP", 00:18:35.974 "adrfam": "IPv4", 00:18:35.974 "traddr": "10.0.0.1", 00:18:35.974 "trsvcid": "33776" 00:18:35.974 }, 00:18:35.974 "auth": { 00:18:35.974 "state": "completed", 00:18:35.974 "digest": "sha256", 00:18:35.974 "dhgroup": "ffdhe6144" 00:18:35.974 } 00:18:35.974 } 00:18:35.974 ]' 00:18:35.974 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.974 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:35.974 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.231 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:36.231 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.231 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.231 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.231 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.231 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:18:36.231 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:18:36.796 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.054 10:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.054 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.618 00:18:37.618 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.618 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.618 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.618 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.618 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.618 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.618 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.618 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.618 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.618 { 00:18:37.618 "cntlid": 35, 00:18:37.618 "qid": 0, 00:18:37.618 "state": "enabled", 00:18:37.618 "thread": "nvmf_tgt_poll_group_000", 00:18:37.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:37.618 "listen_address": { 00:18:37.619 "trtype": "TCP", 00:18:37.619 "adrfam": "IPv4", 00:18:37.619 "traddr": "10.0.0.2", 00:18:37.619 "trsvcid": "4420" 00:18:37.619 }, 00:18:37.619 "peer_address": { 00:18:37.619 "trtype": "TCP", 00:18:37.619 "adrfam": "IPv4", 00:18:37.619 "traddr": "10.0.0.1", 00:18:37.619 "trsvcid": "33802" 00:18:37.619 }, 00:18:37.619 "auth": { 00:18:37.619 "state": "completed", 00:18:37.619 "digest": "sha256", 00:18:37.619 "dhgroup": "ffdhe6144" 00:18:37.619 } 00:18:37.619 } 00:18:37.619 ]' 00:18:37.619 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.619 10:46:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.619 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.880 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:37.880 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.880 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.880 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.880 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.880 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:18:37.880 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:18:38.444 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.702 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.268 00:18:39.268 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.268 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.268 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.268 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.269 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.269 10:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.269 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.269 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.269 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.269 { 00:18:39.269 "cntlid": 37, 00:18:39.269 "qid": 0, 00:18:39.269 "state": "enabled", 00:18:39.269 "thread": "nvmf_tgt_poll_group_000", 00:18:39.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:39.269 "listen_address": { 00:18:39.269 "trtype": "TCP", 00:18:39.269 "adrfam": "IPv4", 00:18:39.269 "traddr": "10.0.0.2", 00:18:39.269 "trsvcid": "4420" 00:18:39.269 }, 00:18:39.269 "peer_address": { 00:18:39.269 "trtype": "TCP", 00:18:39.269 "adrfam": "IPv4", 00:18:39.269 "traddr": "10.0.0.1", 00:18:39.269 "trsvcid": "33834" 00:18:39.269 }, 00:18:39.269 "auth": { 00:18:39.269 "state": "completed", 00:18:39.269 "digest": "sha256", 00:18:39.269 "dhgroup": "ffdhe6144" 00:18:39.269 } 00:18:39.269 } 00:18:39.269 ]' 00:18:39.269 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.269 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.269 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.527 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:39.527 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.527 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.527 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.527 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.786 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:18:39.786 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:18:40.352 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.352 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:40.352 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.352 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.352 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.352 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.352 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:40.352 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:40.610 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:40.610 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.610 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:40.610 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:40.610 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:40.610 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.610 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:40.610 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.610 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.610 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.610 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:40.610 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:40.610 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:40.867 00:18:40.867 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.867 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.867 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.125 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.125 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.125 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.125 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.125 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.125 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.125 { 00:18:41.125 "cntlid": 39, 00:18:41.125 "qid": 0, 00:18:41.125 "state": "enabled", 00:18:41.125 "thread": "nvmf_tgt_poll_group_000", 00:18:41.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:41.125 "listen_address": { 00:18:41.125 "trtype": "TCP", 00:18:41.125 "adrfam": 
"IPv4", 00:18:41.125 "traddr": "10.0.0.2", 00:18:41.125 "trsvcid": "4420" 00:18:41.125 }, 00:18:41.125 "peer_address": { 00:18:41.125 "trtype": "TCP", 00:18:41.125 "adrfam": "IPv4", 00:18:41.125 "traddr": "10.0.0.1", 00:18:41.125 "trsvcid": "49828" 00:18:41.125 }, 00:18:41.125 "auth": { 00:18:41.125 "state": "completed", 00:18:41.125 "digest": "sha256", 00:18:41.125 "dhgroup": "ffdhe6144" 00:18:41.125 } 00:18:41.125 } 00:18:41.125 ]' 00:18:41.125 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.125 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.125 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.125 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:41.125 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.125 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.125 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.125 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.383 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:18:41.383 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:18:41.948 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.948 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:41.948 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.948 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.948 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.948 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.948 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.948 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:41.948 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:42.206 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:42.206 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.206 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:42.206 
10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:42.206 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:42.206 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.206 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.206 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.206 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.206 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.206 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.206 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.206 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.771 00:18:42.771 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.771 10:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.771 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.771 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.771 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.771 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.771 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.771 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.029 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.029 { 00:18:43.029 "cntlid": 41, 00:18:43.029 "qid": 0, 00:18:43.029 "state": "enabled", 00:18:43.029 "thread": "nvmf_tgt_poll_group_000", 00:18:43.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:43.029 "listen_address": { 00:18:43.029 "trtype": "TCP", 00:18:43.029 "adrfam": "IPv4", 00:18:43.029 "traddr": "10.0.0.2", 00:18:43.029 "trsvcid": "4420" 00:18:43.029 }, 00:18:43.029 "peer_address": { 00:18:43.029 "trtype": "TCP", 00:18:43.029 "adrfam": "IPv4", 00:18:43.029 "traddr": "10.0.0.1", 00:18:43.029 "trsvcid": "49852" 00:18:43.029 }, 00:18:43.029 "auth": { 00:18:43.029 "state": "completed", 00:18:43.029 "digest": "sha256", 00:18:43.029 "dhgroup": "ffdhe8192" 00:18:43.029 } 00:18:43.029 } 00:18:43.029 ]' 00:18:43.029 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.029 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:18:43.029 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.029 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:43.029 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.029 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.029 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.029 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.286 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:18:43.286 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.851 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.414 00:18:44.414 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.414 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.414 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.672 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.672 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.672 10:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.672 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.672 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.672 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.672 { 00:18:44.672 "cntlid": 43, 00:18:44.672 "qid": 0, 00:18:44.672 "state": "enabled", 00:18:44.672 "thread": "nvmf_tgt_poll_group_000", 00:18:44.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:44.672 "listen_address": { 00:18:44.672 "trtype": "TCP", 00:18:44.672 "adrfam": "IPv4", 00:18:44.672 "traddr": "10.0.0.2", 00:18:44.672 "trsvcid": "4420" 00:18:44.672 }, 00:18:44.672 "peer_address": { 00:18:44.672 "trtype": "TCP", 00:18:44.672 "adrfam": "IPv4", 00:18:44.672 "traddr": "10.0.0.1", 00:18:44.673 "trsvcid": "49872" 00:18:44.673 }, 00:18:44.673 "auth": { 00:18:44.673 "state": "completed", 00:18:44.673 "digest": "sha256", 00:18:44.673 "dhgroup": "ffdhe8192" 00:18:44.673 } 00:18:44.673 } 00:18:44.673 ]' 00:18:44.673 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.673 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.673 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.673 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:44.673 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.930 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.930 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.930 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.930 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:18:44.930 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:18:45.495 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.495 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:45.495 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.495 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.495 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.495 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.495 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:45.495 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:45.753 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:45.753 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.753 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:45.753 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:45.753 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:45.753 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.753 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.753 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.753 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.753 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.753 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.753 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.754 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.319 00:18:46.319 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.319 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.319 10:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.576 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.576 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.576 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.576 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.576 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.576 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.576 { 00:18:46.576 "cntlid": 45, 00:18:46.576 "qid": 0, 00:18:46.576 "state": "enabled", 00:18:46.576 "thread": "nvmf_tgt_poll_group_000", 00:18:46.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:46.576 
"listen_address": { 00:18:46.576 "trtype": "TCP", 00:18:46.576 "adrfam": "IPv4", 00:18:46.576 "traddr": "10.0.0.2", 00:18:46.576 "trsvcid": "4420" 00:18:46.576 }, 00:18:46.576 "peer_address": { 00:18:46.576 "trtype": "TCP", 00:18:46.576 "adrfam": "IPv4", 00:18:46.576 "traddr": "10.0.0.1", 00:18:46.576 "trsvcid": "49890" 00:18:46.576 }, 00:18:46.576 "auth": { 00:18:46.576 "state": "completed", 00:18:46.576 "digest": "sha256", 00:18:46.576 "dhgroup": "ffdhe8192" 00:18:46.576 } 00:18:46.576 } 00:18:46.576 ]' 00:18:46.576 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.576 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.576 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.576 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:46.576 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.576 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.576 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.576 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.834 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:18:46.834 10:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:18:47.399 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.399 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:47.399 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.399 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.399 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.399 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.399 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:47.399 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:47.657 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:47.657 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.657 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:18:47.657 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:47.657 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:47.657 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.657 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:47.657 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.657 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.657 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.657 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:47.657 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.657 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.221 00:18:48.221 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.221 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:18:48.221 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.221 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.221 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.221 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.221 10:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.221 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.221 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.221 { 00:18:48.221 "cntlid": 47, 00:18:48.221 "qid": 0, 00:18:48.221 "state": "enabled", 00:18:48.222 "thread": "nvmf_tgt_poll_group_000", 00:18:48.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:48.222 "listen_address": { 00:18:48.222 "trtype": "TCP", 00:18:48.222 "adrfam": "IPv4", 00:18:48.222 "traddr": "10.0.0.2", 00:18:48.222 "trsvcid": "4420" 00:18:48.222 }, 00:18:48.222 "peer_address": { 00:18:48.222 "trtype": "TCP", 00:18:48.222 "adrfam": "IPv4", 00:18:48.222 "traddr": "10.0.0.1", 00:18:48.222 "trsvcid": "49920" 00:18:48.222 }, 00:18:48.222 "auth": { 00:18:48.222 "state": "completed", 00:18:48.222 "digest": "sha256", 00:18:48.222 "dhgroup": "ffdhe8192" 00:18:48.222 } 00:18:48.222 } 00:18:48.222 ]' 00:18:48.222 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.479 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.479 10:46:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.479 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:48.479 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.479 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.479 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.479 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.737 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:18:48.737 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:18:49.302 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.303 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:49.303 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:49.303 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.303 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.303 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:49.303 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.303 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.303 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:49.303 10:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:49.560 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:49.560 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.560 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:49.560 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:49.560 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:49.560 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.560 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.560 
10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.560 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.560 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.560 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.560 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.560 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.560 00:18:49.560 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.560 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.818 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.818 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.818 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.818 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.818 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.818 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.818 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.818 { 00:18:49.818 "cntlid": 49, 00:18:49.818 "qid": 0, 00:18:49.818 "state": "enabled", 00:18:49.818 "thread": "nvmf_tgt_poll_group_000", 00:18:49.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:49.818 "listen_address": { 00:18:49.818 "trtype": "TCP", 00:18:49.818 "adrfam": "IPv4", 00:18:49.818 "traddr": "10.0.0.2", 00:18:49.818 "trsvcid": "4420" 00:18:49.818 }, 00:18:49.818 "peer_address": { 00:18:49.818 "trtype": "TCP", 00:18:49.818 "adrfam": "IPv4", 00:18:49.818 "traddr": "10.0.0.1", 00:18:49.818 "trsvcid": "49948" 00:18:49.818 }, 00:18:49.818 "auth": { 00:18:49.818 "state": "completed", 00:18:49.818 "digest": "sha384", 00:18:49.818 "dhgroup": "null" 00:18:49.818 } 00:18:49.818 } 00:18:49.818 ]' 00:18:49.818 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.076 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.076 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.076 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:50.076 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.076 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.076 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:18:50.076 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.333 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:18:50.333 10:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:18:50.898 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.898 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:50.898 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.898 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.898 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.898 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.898 10:46:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:50.898 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:50.898 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:50.898 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.898 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:50.899 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:50.899 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:50.899 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.899 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.899 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.899 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.899 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.899 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.899 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.899 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.156 00:18:51.156 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.156 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.156 10:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.415 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.415 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.415 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.415 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.415 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.415 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.415 { 00:18:51.415 "cntlid": 51, 00:18:51.415 "qid": 0, 00:18:51.415 "state": "enabled", 00:18:51.415 "thread": "nvmf_tgt_poll_group_000", 00:18:51.415 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:51.415 "listen_address": { 00:18:51.415 "trtype": "TCP", 00:18:51.415 "adrfam": "IPv4", 00:18:51.415 "traddr": "10.0.0.2", 00:18:51.415 "trsvcid": "4420" 00:18:51.415 }, 00:18:51.415 "peer_address": { 00:18:51.415 "trtype": "TCP", 00:18:51.415 "adrfam": "IPv4", 00:18:51.415 "traddr": "10.0.0.1", 00:18:51.415 "trsvcid": "49728" 00:18:51.415 }, 00:18:51.415 "auth": { 00:18:51.415 "state": "completed", 00:18:51.415 "digest": "sha384", 00:18:51.415 "dhgroup": "null" 00:18:51.415 } 00:18:51.415 } 00:18:51.415 ]' 00:18:51.415 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.415 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.415 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.415 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:51.415 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.673 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.673 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.673 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.673 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:18:51.673 10:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:18:52.238 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.238 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:52.238 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.238 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.238 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.238 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.238 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:52.238 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:52.496 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:52.496 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:18:52.496 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:52.496 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:52.496 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:52.496 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.496 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.496 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.496 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.496 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.496 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.496 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.496 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.754 00:18:52.754 10:46:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.754 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.754 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.012 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.012 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.012 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.012 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.012 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.012 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.012 { 00:18:53.012 "cntlid": 53, 00:18:53.012 "qid": 0, 00:18:53.012 "state": "enabled", 00:18:53.012 "thread": "nvmf_tgt_poll_group_000", 00:18:53.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:53.012 "listen_address": { 00:18:53.012 "trtype": "TCP", 00:18:53.012 "adrfam": "IPv4", 00:18:53.012 "traddr": "10.0.0.2", 00:18:53.012 "trsvcid": "4420" 00:18:53.012 }, 00:18:53.012 "peer_address": { 00:18:53.012 "trtype": "TCP", 00:18:53.012 "adrfam": "IPv4", 00:18:53.012 "traddr": "10.0.0.1", 00:18:53.012 "trsvcid": "49754" 00:18:53.012 }, 00:18:53.012 "auth": { 00:18:53.012 "state": "completed", 00:18:53.012 "digest": "sha384", 00:18:53.012 "dhgroup": "null" 00:18:53.012 } 00:18:53.012 } 00:18:53.012 ]' 00:18:53.012 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:18:53.012 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.012 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.012 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:53.012 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.012 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.012 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.012 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.269 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:18:53.270 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:18:53.835 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.835 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:53.835 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.835 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.835 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.835 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.835 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:53.835 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:54.093 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:54.093 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.093 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:54.093 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:54.093 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:54.093 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.093 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:54.093 
10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.093 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.093 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.093 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:54.093 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.093 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.351 00:18:54.351 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.351 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.351 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.609 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.609 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.609 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.609 10:46:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.609 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.609 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.609 { 00:18:54.609 "cntlid": 55, 00:18:54.609 "qid": 0, 00:18:54.609 "state": "enabled", 00:18:54.609 "thread": "nvmf_tgt_poll_group_000", 00:18:54.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:54.609 "listen_address": { 00:18:54.609 "trtype": "TCP", 00:18:54.609 "adrfam": "IPv4", 00:18:54.609 "traddr": "10.0.0.2", 00:18:54.609 "trsvcid": "4420" 00:18:54.609 }, 00:18:54.609 "peer_address": { 00:18:54.609 "trtype": "TCP", 00:18:54.609 "adrfam": "IPv4", 00:18:54.609 "traddr": "10.0.0.1", 00:18:54.609 "trsvcid": "49784" 00:18:54.609 }, 00:18:54.609 "auth": { 00:18:54.609 "state": "completed", 00:18:54.609 "digest": "sha384", 00:18:54.609 "dhgroup": "null" 00:18:54.609 } 00:18:54.609 } 00:18:54.609 ]' 00:18:54.609 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.609 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.609 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.609 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:54.609 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.609 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.609 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.609 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.867 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:18:54.867 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:18:55.432 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.432 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:55.432 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.432 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.432 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.432 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.432 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.432 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:55.432 10:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:55.689 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:55.689 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.689 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:55.689 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:55.689 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:55.689 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.689 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.689 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.689 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.689 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.689 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.689 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.689 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.947 00:18:55.947 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.947 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.947 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.205 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.205 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.205 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.205 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.205 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.205 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.205 { 00:18:56.205 "cntlid": 57, 00:18:56.205 "qid": 0, 00:18:56.205 "state": "enabled", 00:18:56.205 "thread": "nvmf_tgt_poll_group_000", 00:18:56.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:56.205 "listen_address": { 00:18:56.205 "trtype": "TCP", 00:18:56.205 "adrfam": "IPv4", 00:18:56.205 "traddr": "10.0.0.2", 00:18:56.205 
"trsvcid": "4420" 00:18:56.205 }, 00:18:56.205 "peer_address": { 00:18:56.205 "trtype": "TCP", 00:18:56.205 "adrfam": "IPv4", 00:18:56.205 "traddr": "10.0.0.1", 00:18:56.205 "trsvcid": "49814" 00:18:56.205 }, 00:18:56.205 "auth": { 00:18:56.205 "state": "completed", 00:18:56.205 "digest": "sha384", 00:18:56.205 "dhgroup": "ffdhe2048" 00:18:56.205 } 00:18:56.205 } 00:18:56.205 ]' 00:18:56.205 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.205 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.205 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.205 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:56.205 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.205 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.205 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.205 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.462 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:18:56.462 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:18:57.028 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.028 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:57.028 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.028 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.028 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.028 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.028 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:57.028 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:57.286 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:57.286 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.286 10:46:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:57.286 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:57.286 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:57.286 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.286 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.286 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.286 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.286 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.286 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.286 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.286 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.543 00:18:57.544 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.544 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.544 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.544 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.544 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.544 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.544 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.544 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.544 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.544 { 00:18:57.544 "cntlid": 59, 00:18:57.544 "qid": 0, 00:18:57.544 "state": "enabled", 00:18:57.544 "thread": "nvmf_tgt_poll_group_000", 00:18:57.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:57.544 "listen_address": { 00:18:57.544 "trtype": "TCP", 00:18:57.544 "adrfam": "IPv4", 00:18:57.544 "traddr": "10.0.0.2", 00:18:57.544 "trsvcid": "4420" 00:18:57.544 }, 00:18:57.544 "peer_address": { 00:18:57.544 "trtype": "TCP", 00:18:57.544 "adrfam": "IPv4", 00:18:57.544 "traddr": "10.0.0.1", 00:18:57.544 "trsvcid": "49850" 00:18:57.544 }, 00:18:57.544 "auth": { 00:18:57.544 "state": "completed", 00:18:57.544 "digest": "sha384", 00:18:57.544 "dhgroup": "ffdhe2048" 00:18:57.544 } 00:18:57.544 } 00:18:57.544 ]' 00:18:57.544 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.801 10:46:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.801 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.801 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:57.801 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.801 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.801 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.801 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.059 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:18:58.059 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:18:58.623 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.623 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:58.623 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.623 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.623 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.623 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.624 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:58.624 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:58.881 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:58.881 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.881 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:58.881 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:58.881 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:58.881 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.881 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:58.881 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.881 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.881 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.881 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.881 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.881 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.138 00:18:59.138 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.138 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.138 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.138 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.138 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.138 10:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.138 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.138 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.138 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.138 { 00:18:59.138 "cntlid": 61, 00:18:59.138 "qid": 0, 00:18:59.138 "state": "enabled", 00:18:59.138 "thread": "nvmf_tgt_poll_group_000", 00:18:59.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:59.138 "listen_address": { 00:18:59.138 "trtype": "TCP", 00:18:59.138 "adrfam": "IPv4", 00:18:59.138 "traddr": "10.0.0.2", 00:18:59.138 "trsvcid": "4420" 00:18:59.138 }, 00:18:59.138 "peer_address": { 00:18:59.138 "trtype": "TCP", 00:18:59.138 "adrfam": "IPv4", 00:18:59.138 "traddr": "10.0.0.1", 00:18:59.138 "trsvcid": "49864" 00:18:59.138 }, 00:18:59.138 "auth": { 00:18:59.138 "state": "completed", 00:18:59.138 "digest": "sha384", 00:18:59.138 "dhgroup": "ffdhe2048" 00:18:59.138 } 00:18:59.138 } 00:18:59.138 ]' 00:18:59.138 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.395 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.395 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.395 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:59.395 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.395 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.395 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.395 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.651 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:18:59.651 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:19:00.215 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.215 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:00.215 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.215 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.215 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.215 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.215 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:00.215 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:00.215 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:19:00.215 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.215 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:00.215 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:00.215 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:00.215 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.215 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:00.215 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.215 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.473 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.473 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:00.473 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.473 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.731 00:19:00.731 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.731 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.731 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.731 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.731 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.731 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.731 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.731 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.731 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.731 { 00:19:00.731 "cntlid": 63, 00:19:00.731 "qid": 0, 00:19:00.731 "state": "enabled", 00:19:00.731 "thread": "nvmf_tgt_poll_group_000", 00:19:00.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:00.731 "listen_address": { 00:19:00.731 "trtype": "TCP", 00:19:00.731 "adrfam": 
"IPv4", 00:19:00.731 "traddr": "10.0.0.2", 00:19:00.731 "trsvcid": "4420" 00:19:00.731 }, 00:19:00.731 "peer_address": { 00:19:00.731 "trtype": "TCP", 00:19:00.731 "adrfam": "IPv4", 00:19:00.731 "traddr": "10.0.0.1", 00:19:00.731 "trsvcid": "53466" 00:19:00.731 }, 00:19:00.731 "auth": { 00:19:00.731 "state": "completed", 00:19:00.731 "digest": "sha384", 00:19:00.731 "dhgroup": "ffdhe2048" 00:19:00.731 } 00:19:00.731 } 00:19:00.731 ]' 00:19:00.731 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.988 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:00.988 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.988 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:00.988 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.988 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.989 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.989 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.244 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:19:01.244 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=:
00:19:01.810 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:01.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:01.810 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:19:01.810 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:01.810 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:01.810 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:01.810 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:01.810 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:01.810 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:01.811 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:01.811 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:19:01.811 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:01.811 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:01.811 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:01.811 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:01.811 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:01.811 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:01.811 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:01.811 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:01.811 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:01.811 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:01.811 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:01.811 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:02.068
00:19:02.068 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:02.068 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:02.068 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:02.326 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:02.326 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:02.326 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:02.326 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:02.326 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:02.326 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:02.326 {
00:19:02.326 "cntlid": 65,
00:19:02.326 "qid": 0,
00:19:02.326 "state": "enabled",
00:19:02.326 "thread": "nvmf_tgt_poll_group_000",
00:19:02.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:02.326 "listen_address": {
00:19:02.326 "trtype": "TCP",
00:19:02.326 "adrfam": "IPv4",
00:19:02.326 "traddr": "10.0.0.2",
00:19:02.326 "trsvcid": "4420"
00:19:02.326 },
00:19:02.326 "peer_address": {
00:19:02.326 "trtype": "TCP",
00:19:02.326 "adrfam": "IPv4",
00:19:02.326 "traddr": "10.0.0.1",
00:19:02.326 "trsvcid": "53496"
00:19:02.326 },
00:19:02.326 "auth": {
00:19:02.326 "state": "completed",
00:19:02.326 "digest": "sha384",
00:19:02.326 "dhgroup": "ffdhe3072"
00:19:02.326 }
00:19:02.326 }
00:19:02.326 ]'
00:19:02.326 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:02.326 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:02.326 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:02.583 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:02.583 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:02.583 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:02.584 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:02.584 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:02.584 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=:
00:19:02.584 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=:
00:19:03.149 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:03.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:03.406 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:19:03.406 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:03.406 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:03.406 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:03.406 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:03.406 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:03.406 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:03.406 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:19:03.406 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:03.406 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:03.406 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:03.406 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:03.406 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:03.406 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:03.406 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:03.406 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:03.406 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:03.406 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:03.406 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:03.406 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:03.664
00:19:03.664 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:03.664 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:03.664 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:03.922 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:03.922 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:03.922 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:03.922 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:03.922 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:03.922 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:03.922 {
00:19:03.922 "cntlid": 67,
00:19:03.922 "qid": 0,
00:19:03.922 "state": "enabled",
00:19:03.922 "thread": "nvmf_tgt_poll_group_000",
00:19:03.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:03.922 "listen_address": {
00:19:03.922 "trtype": "TCP",
00:19:03.922 "adrfam": "IPv4",
00:19:03.922 "traddr": "10.0.0.2",
00:19:03.922 "trsvcid": "4420"
00:19:03.922 },
00:19:03.922 "peer_address": {
00:19:03.922 "trtype": "TCP",
00:19:03.922 "adrfam": "IPv4",
00:19:03.922 "traddr": "10.0.0.1",
00:19:03.922 "trsvcid": "53526"
00:19:03.922 },
00:19:03.922 "auth": {
00:19:03.922 "state": "completed",
00:19:03.922 "digest": "sha384",
00:19:03.922 "dhgroup": "ffdhe3072"
00:19:03.922 }
00:19:03.922 }
00:19:03.922 ]'
00:19:03.922 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:04.180 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:04.180 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:04.180 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:04.180 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:04.180 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:04.180 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:04.180 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:04.438 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==:
00:19:04.438 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==:
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:05.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:05.004 10:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:05.262
00:19:05.520 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:05.520 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:05.520 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:05.520 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:05.520 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:05.520 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:05.520 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:05.520 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:05.520 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:05.520 {
00:19:05.520 "cntlid": 69,
00:19:05.520 "qid": 0,
00:19:05.520 "state": "enabled",
00:19:05.520 "thread": "nvmf_tgt_poll_group_000",
00:19:05.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:05.520 "listen_address": {
00:19:05.520 "trtype": "TCP",
00:19:05.520 "adrfam": "IPv4",
00:19:05.520 "traddr": "10.0.0.2",
00:19:05.520 "trsvcid": "4420"
00:19:05.520 },
00:19:05.520 "peer_address": {
00:19:05.520 "trtype": "TCP",
00:19:05.520 "adrfam": "IPv4",
00:19:05.520 "traddr": "10.0.0.1",
00:19:05.520 "trsvcid": "53544"
00:19:05.520 },
00:19:05.520 "auth": {
00:19:05.520 "state": "completed",
00:19:05.520 "digest": "sha384",
00:19:05.520 "dhgroup": "ffdhe3072"
00:19:05.520 }
00:19:05.520 }
00:19:05.520 ]'
00:19:05.520 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:05.777 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:05.777 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:05.777 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:05.777 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:05.777 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:05.777 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:05.778 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:06.035 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw:
00:19:06.035 10:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw:
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:06.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:06.601 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:06.860
00:19:06.860 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:06.860 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:06.860 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:07.118 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:07.118 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:07.118 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:07.118 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:07.118 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:07.118 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:07.118 {
00:19:07.118 "cntlid": 71,
00:19:07.118 "qid": 0,
00:19:07.118 "state": "enabled",
00:19:07.118 "thread": "nvmf_tgt_poll_group_000",
00:19:07.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:07.118 "listen_address": {
00:19:07.118 "trtype": "TCP",
00:19:07.118 "adrfam": "IPv4",
00:19:07.118 "traddr": "10.0.0.2",
00:19:07.118 "trsvcid": "4420"
00:19:07.118 },
00:19:07.118 "peer_address": {
00:19:07.118 "trtype": "TCP",
00:19:07.118 "adrfam": "IPv4",
00:19:07.118 "traddr": "10.0.0.1",
00:19:07.118 "trsvcid": "53568"
00:19:07.118 },
00:19:07.118 "auth": {
00:19:07.118 "state": "completed",
00:19:07.118 "digest": "sha384",
00:19:07.118 "dhgroup": "ffdhe3072"
00:19:07.118 }
00:19:07.118 }
00:19:07.118 ]'
00:19:07.118 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:07.118 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:07.118 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:07.376 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:07.376 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:07.376 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:07.376 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:07.376 10:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:07.376 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=:
00:19:07.376 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=:
00:19:07.941 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:08.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:08.200 10:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:08.457
00:19:08.457 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:08.457 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:08.457 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:08.714 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:08.714 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:08.714 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:08.714 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:08.714 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:08.714 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:08.714 {
00:19:08.714 "cntlid": 73,
00:19:08.714 "qid": 0,
00:19:08.714 "state": "enabled",
00:19:08.714 "thread": "nvmf_tgt_poll_group_000",
00:19:08.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:08.714 "listen_address": {
00:19:08.714 "trtype": "TCP",
00:19:08.714 "adrfam": "IPv4",
00:19:08.714 "traddr": "10.0.0.2",
00:19:08.714 "trsvcid": "4420"
00:19:08.714 },
00:19:08.714 "peer_address": {
00:19:08.714 "trtype": "TCP",
00:19:08.714 "adrfam": "IPv4",
00:19:08.714 "traddr": "10.0.0.1",
00:19:08.714 "trsvcid": "53596"
00:19:08.714 },
00:19:08.714 "auth": {
00:19:08.714 "state": "completed",
00:19:08.714 "digest": "sha384",
00:19:08.714 "dhgroup": "ffdhe4096"
00:19:08.714 }
00:19:08.714 }
00:19:08.714 ]'
00:19:08.714 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:08.714 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:08.714 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:08.972 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:08.972 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:08.972 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:08.972 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:08.972 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:09.230 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=:
00:19:09.230 10:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=:
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:09.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.795 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.054 00:19:10.313 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.313 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.313 10:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.313 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.313 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.313 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.313 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.313 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.313 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.313 { 00:19:10.313 "cntlid": 75, 00:19:10.313 "qid": 0, 00:19:10.313 "state": "enabled", 00:19:10.313 "thread": "nvmf_tgt_poll_group_000", 00:19:10.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:10.313 
"listen_address": { 00:19:10.313 "trtype": "TCP", 00:19:10.313 "adrfam": "IPv4", 00:19:10.313 "traddr": "10.0.0.2", 00:19:10.313 "trsvcid": "4420" 00:19:10.313 }, 00:19:10.313 "peer_address": { 00:19:10.313 "trtype": "TCP", 00:19:10.313 "adrfam": "IPv4", 00:19:10.313 "traddr": "10.0.0.1", 00:19:10.313 "trsvcid": "41570" 00:19:10.313 }, 00:19:10.313 "auth": { 00:19:10.313 "state": "completed", 00:19:10.313 "digest": "sha384", 00:19:10.313 "dhgroup": "ffdhe4096" 00:19:10.313 } 00:19:10.313 } 00:19:10.313 ]' 00:19:10.313 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.570 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.570 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.570 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:10.570 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.570 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.570 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.570 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.828 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:19:10.828 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:19:11.394 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.394 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:11.394 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.394 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.394 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.394 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.394 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:11.394 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:11.394 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:11.394 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.394 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:19:11.394 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:11.394 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:11.394 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.394 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.394 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.395 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.653 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.653 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.653 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.653 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.911 00:19:11.911 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:11.911 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.911 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.911 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.911 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.911 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.911 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.911 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.911 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.911 { 00:19:11.911 "cntlid": 77, 00:19:11.911 "qid": 0, 00:19:11.911 "state": "enabled", 00:19:11.911 "thread": "nvmf_tgt_poll_group_000", 00:19:11.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:11.911 "listen_address": { 00:19:11.911 "trtype": "TCP", 00:19:11.911 "adrfam": "IPv4", 00:19:11.911 "traddr": "10.0.0.2", 00:19:11.911 "trsvcid": "4420" 00:19:11.911 }, 00:19:11.911 "peer_address": { 00:19:11.911 "trtype": "TCP", 00:19:11.911 "adrfam": "IPv4", 00:19:11.911 "traddr": "10.0.0.1", 00:19:11.911 "trsvcid": "41606" 00:19:11.911 }, 00:19:11.911 "auth": { 00:19:11.911 "state": "completed", 00:19:11.911 "digest": "sha384", 00:19:11.911 "dhgroup": "ffdhe4096" 00:19:11.911 } 00:19:11.911 } 00:19:11.911 ]' 00:19:11.911 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.170 10:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:12.170 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.170 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:12.170 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.170 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.170 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.170 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.428 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:19:12.428 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:19:12.993 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.993 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:12.993 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.993 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.993 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.993 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.993 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:12.993 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:12.993 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:12.993 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.994 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:12.994 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:12.994 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:12.994 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.994 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:12.994 10:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.994 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.994 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.994 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:12.994 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:12.994 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.251 00:19:13.509 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.509 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.509 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.509 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.509 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.509 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.509 10:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.509 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.509 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.509 { 00:19:13.509 "cntlid": 79, 00:19:13.509 "qid": 0, 00:19:13.509 "state": "enabled", 00:19:13.509 "thread": "nvmf_tgt_poll_group_000", 00:19:13.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:13.509 "listen_address": { 00:19:13.509 "trtype": "TCP", 00:19:13.509 "adrfam": "IPv4", 00:19:13.509 "traddr": "10.0.0.2", 00:19:13.509 "trsvcid": "4420" 00:19:13.509 }, 00:19:13.509 "peer_address": { 00:19:13.509 "trtype": "TCP", 00:19:13.509 "adrfam": "IPv4", 00:19:13.509 "traddr": "10.0.0.1", 00:19:13.509 "trsvcid": "41632" 00:19:13.509 }, 00:19:13.509 "auth": { 00:19:13.509 "state": "completed", 00:19:13.509 "digest": "sha384", 00:19:13.509 "dhgroup": "ffdhe4096" 00:19:13.509 } 00:19:13.509 } 00:19:13.509 ]' 00:19:13.509 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.767 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.767 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.767 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:13.767 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.767 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.767 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.767 10:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.025 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:19:14.025 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.591 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.158 00:19:15.158 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.158 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.158 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.158 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.158 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.158 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.158 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.158 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.158 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.158 { 00:19:15.158 "cntlid": 81, 00:19:15.158 "qid": 0, 00:19:15.158 "state": "enabled", 00:19:15.158 "thread": "nvmf_tgt_poll_group_000", 00:19:15.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:15.158 "listen_address": { 
00:19:15.158 "trtype": "TCP", 00:19:15.158 "adrfam": "IPv4", 00:19:15.158 "traddr": "10.0.0.2", 00:19:15.158 "trsvcid": "4420" 00:19:15.158 }, 00:19:15.158 "peer_address": { 00:19:15.158 "trtype": "TCP", 00:19:15.158 "adrfam": "IPv4", 00:19:15.158 "traddr": "10.0.0.1", 00:19:15.158 "trsvcid": "41642" 00:19:15.158 }, 00:19:15.158 "auth": { 00:19:15.158 "state": "completed", 00:19:15.158 "digest": "sha384", 00:19:15.158 "dhgroup": "ffdhe6144" 00:19:15.158 } 00:19:15.158 } 00:19:15.158 ]' 00:19:15.158 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.415 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.415 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.415 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:15.415 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.415 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.415 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.415 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.673 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:19:15.673 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:19:16.240 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.240 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:16.240 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.240 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.240 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.240 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.240 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:16.240 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:16.240 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:19:16.240 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:19:16.241 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:16.241 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:16.241 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:16.241 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.241 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.241 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.241 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.241 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.241 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.241 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.241 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.808 00:19:16.808 10:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.808 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.808 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.808 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.808 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.808 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.808 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.808 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.808 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.808 { 00:19:16.808 "cntlid": 83, 00:19:16.808 "qid": 0, 00:19:16.808 "state": "enabled", 00:19:16.808 "thread": "nvmf_tgt_poll_group_000", 00:19:16.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:16.808 "listen_address": { 00:19:16.808 "trtype": "TCP", 00:19:16.808 "adrfam": "IPv4", 00:19:16.808 "traddr": "10.0.0.2", 00:19:16.808 "trsvcid": "4420" 00:19:16.808 }, 00:19:16.808 "peer_address": { 00:19:16.808 "trtype": "TCP", 00:19:16.808 "adrfam": "IPv4", 00:19:16.808 "traddr": "10.0.0.1", 00:19:16.808 "trsvcid": "41672" 00:19:16.808 }, 00:19:16.808 "auth": { 00:19:16.808 "state": "completed", 00:19:16.808 "digest": "sha384", 00:19:16.808 "dhgroup": "ffdhe6144" 00:19:16.808 } 00:19:16.808 } 00:19:16.808 ]' 00:19:16.808 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:19:17.066 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.066 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.066 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:17.066 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.067 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.067 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.067 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.324 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:19:17.324 10:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:19:17.890 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.890 10:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:17.890 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.890 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.890 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.890 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.890 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:17.890 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:18.148 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:19:18.148 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.148 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:18.148 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:18.148 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:18.148 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.148 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.148 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.148 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.148 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.148 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.148 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.148 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.405 00:19:18.405 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.405 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.405 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.664 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.664 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.664 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.664 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.664 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.664 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.664 { 00:19:18.664 "cntlid": 85, 00:19:18.664 "qid": 0, 00:19:18.664 "state": "enabled", 00:19:18.664 "thread": "nvmf_tgt_poll_group_000", 00:19:18.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:18.664 "listen_address": { 00:19:18.664 "trtype": "TCP", 00:19:18.664 "adrfam": "IPv4", 00:19:18.664 "traddr": "10.0.0.2", 00:19:18.664 "trsvcid": "4420" 00:19:18.664 }, 00:19:18.664 "peer_address": { 00:19:18.664 "trtype": "TCP", 00:19:18.664 "adrfam": "IPv4", 00:19:18.664 "traddr": "10.0.0.1", 00:19:18.664 "trsvcid": "41694" 00:19:18.664 }, 00:19:18.664 "auth": { 00:19:18.664 "state": "completed", 00:19:18.664 "digest": "sha384", 00:19:18.664 "dhgroup": "ffdhe6144" 00:19:18.664 } 00:19:18.664 } 00:19:18.664 ]' 00:19:18.664 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.664 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.664 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.664 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:18.664 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.664 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:18.664 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.664 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.922 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:19:18.922 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:19:19.488 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.488 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:19.488 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.488 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.488 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.488 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:19.488 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:19.488 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:19.746 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:19.746 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.746 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:19.746 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:19.746 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:19.746 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.746 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:19.746 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.746 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.746 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.746 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:19.746 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:19.746 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:20.004 00:19:20.004 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.004 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.004 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.262 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.262 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.262 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.262 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.262 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.262 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.262 { 00:19:20.262 "cntlid": 87, 00:19:20.262 "qid": 0, 00:19:20.262 "state": "enabled", 00:19:20.262 "thread": "nvmf_tgt_poll_group_000", 00:19:20.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:20.262 "listen_address": { 00:19:20.262 "trtype": 
"TCP", 00:19:20.262 "adrfam": "IPv4", 00:19:20.262 "traddr": "10.0.0.2", 00:19:20.262 "trsvcid": "4420" 00:19:20.262 }, 00:19:20.262 "peer_address": { 00:19:20.262 "trtype": "TCP", 00:19:20.262 "adrfam": "IPv4", 00:19:20.262 "traddr": "10.0.0.1", 00:19:20.262 "trsvcid": "41718" 00:19:20.262 }, 00:19:20.262 "auth": { 00:19:20.262 "state": "completed", 00:19:20.262 "digest": "sha384", 00:19:20.262 "dhgroup": "ffdhe6144" 00:19:20.262 } 00:19:20.262 } 00:19:20.262 ]' 00:19:20.262 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.262 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.262 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.262 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:20.262 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.519 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.519 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.520 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.520 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:19:20.520 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:19:21.084 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.085 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:21.085 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.085 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.085 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.085 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.085 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.085 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:21.085 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:21.343 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:19:21.343 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.343 10:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:21.343 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:21.343 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:21.343 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.343 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.343 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.343 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.343 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.343 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.343 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.343 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.910 00:19:21.910 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.910 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.910 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.168 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.168 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.168 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.168 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.168 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.168 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.168 { 00:19:22.168 "cntlid": 89, 00:19:22.168 "qid": 0, 00:19:22.168 "state": "enabled", 00:19:22.168 "thread": "nvmf_tgt_poll_group_000", 00:19:22.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:22.168 "listen_address": { 00:19:22.168 "trtype": "TCP", 00:19:22.168 "adrfam": "IPv4", 00:19:22.168 "traddr": "10.0.0.2", 00:19:22.168 "trsvcid": "4420" 00:19:22.168 }, 00:19:22.168 "peer_address": { 00:19:22.168 "trtype": "TCP", 00:19:22.168 "adrfam": "IPv4", 00:19:22.168 "traddr": "10.0.0.1", 00:19:22.168 "trsvcid": "54498" 00:19:22.168 }, 00:19:22.168 "auth": { 00:19:22.168 "state": "completed", 00:19:22.168 "digest": "sha384", 00:19:22.168 "dhgroup": "ffdhe8192" 00:19:22.168 } 00:19:22.168 } 00:19:22.168 ]' 00:19:22.168 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.168 10:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.168 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.168 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:22.168 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.168 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.168 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.168 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.426 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:19:22.426 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:19:22.994 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:22.994 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:22.994 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.994 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.994 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.994 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.994 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:22.994 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:23.252 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:19:23.252 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.252 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:23.252 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:23.252 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:23.252 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.252 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.252 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.252 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.252 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.252 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.252 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.252 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.510 00:19:23.769 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.769 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.769 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.769 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.769 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.769 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.769 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.769 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.769 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.769 { 00:19:23.769 "cntlid": 91, 00:19:23.769 "qid": 0, 00:19:23.769 "state": "enabled", 00:19:23.769 "thread": "nvmf_tgt_poll_group_000", 00:19:23.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:23.769 "listen_address": { 00:19:23.769 "trtype": "TCP", 00:19:23.769 "adrfam": "IPv4", 00:19:23.769 "traddr": "10.0.0.2", 00:19:23.769 "trsvcid": "4420" 00:19:23.769 }, 00:19:23.769 "peer_address": { 00:19:23.769 "trtype": "TCP", 00:19:23.769 "adrfam": "IPv4", 00:19:23.769 "traddr": "10.0.0.1", 00:19:23.769 "trsvcid": "54520" 00:19:23.769 }, 00:19:23.769 "auth": { 00:19:23.769 "state": "completed", 00:19:23.769 "digest": "sha384", 00:19:23.769 "dhgroup": "ffdhe8192" 00:19:23.769 } 00:19:23.769 } 00:19:23.769 ]' 00:19:23.769 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.769 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:23.769 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.026 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:24.026 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.026 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:24.026 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.026 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.284 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:19:24.284 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:19:24.850 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.850 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:24.850 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.850 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.850 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.850 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:24.850 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:24.850 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:24.850 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:19:24.850 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.850 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:24.850 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:24.850 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:24.850 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.850 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.850 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.850 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.109 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.109 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.109 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.109 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.367 00:19:25.367 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.367 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.367 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.625 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.625 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.625 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.625 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.625 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.625 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.625 { 00:19:25.625 "cntlid": 93, 00:19:25.625 "qid": 0, 00:19:25.625 "state": "enabled", 00:19:25.625 "thread": "nvmf_tgt_poll_group_000", 00:19:25.625 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:25.625 "listen_address": { 00:19:25.625 "trtype": "TCP", 00:19:25.625 "adrfam": "IPv4", 00:19:25.625 "traddr": "10.0.0.2", 00:19:25.625 "trsvcid": "4420" 00:19:25.625 }, 00:19:25.625 "peer_address": { 00:19:25.625 "trtype": "TCP", 00:19:25.625 "adrfam": "IPv4", 00:19:25.625 "traddr": "10.0.0.1", 00:19:25.625 "trsvcid": "54560" 00:19:25.625 }, 00:19:25.625 "auth": { 00:19:25.625 "state": "completed", 00:19:25.625 "digest": "sha384", 00:19:25.625 "dhgroup": "ffdhe8192" 00:19:25.625 } 00:19:25.625 } 00:19:25.625 ]' 00:19:25.625 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.625 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:25.625 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.883 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:25.883 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.883 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.883 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.883 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.883 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:19:25.883 10:47:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:19:26.450 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.450 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:26.450 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.450 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.709 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.709 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.709 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:26.709 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:26.709 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:19:26.709 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:19:26.709 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:26.709 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:26.709 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:26.709 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.709 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:26.709 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.709 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.709 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.709 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:26.709 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.709 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.277 00:19:27.277 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:27.277 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.277 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.535 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.535 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.535 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.535 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.536 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.536 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.536 { 00:19:27.536 "cntlid": 95, 00:19:27.536 "qid": 0, 00:19:27.536 "state": "enabled", 00:19:27.536 "thread": "nvmf_tgt_poll_group_000", 00:19:27.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:27.536 "listen_address": { 00:19:27.536 "trtype": "TCP", 00:19:27.536 "adrfam": "IPv4", 00:19:27.536 "traddr": "10.0.0.2", 00:19:27.536 "trsvcid": "4420" 00:19:27.536 }, 00:19:27.536 "peer_address": { 00:19:27.536 "trtype": "TCP", 00:19:27.536 "adrfam": "IPv4", 00:19:27.536 "traddr": "10.0.0.1", 00:19:27.536 "trsvcid": "54606" 00:19:27.536 }, 00:19:27.536 "auth": { 00:19:27.536 "state": "completed", 00:19:27.536 "digest": "sha384", 00:19:27.536 "dhgroup": "ffdhe8192" 00:19:27.536 } 00:19:27.536 } 00:19:27.536 ]' 00:19:27.536 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.536 10:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:27.536 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.536 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:27.536 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.536 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.536 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.536 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.795 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:19:27.795 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:19:28.362 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.362 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:28.362 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.362 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.362 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.362 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:28.362 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.362 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.362 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:28.362 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:28.622 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:19:28.622 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.622 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:28.622 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:28.622 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:28.622 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.622 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.622 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.622 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.622 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.622 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.622 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.622 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.881 00:19:28.881 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.881 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.881 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.141 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.141 10:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.141 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.141 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.141 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.141 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.141 { 00:19:29.141 "cntlid": 97, 00:19:29.141 "qid": 0, 00:19:29.141 "state": "enabled", 00:19:29.141 "thread": "nvmf_tgt_poll_group_000", 00:19:29.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:29.141 "listen_address": { 00:19:29.141 "trtype": "TCP", 00:19:29.141 "adrfam": "IPv4", 00:19:29.141 "traddr": "10.0.0.2", 00:19:29.141 "trsvcid": "4420" 00:19:29.141 }, 00:19:29.141 "peer_address": { 00:19:29.141 "trtype": "TCP", 00:19:29.141 "adrfam": "IPv4", 00:19:29.141 "traddr": "10.0.0.1", 00:19:29.141 "trsvcid": "54622" 00:19:29.141 }, 00:19:29.141 "auth": { 00:19:29.141 "state": "completed", 00:19:29.141 "digest": "sha512", 00:19:29.141 "dhgroup": "null" 00:19:29.141 } 00:19:29.141 } 00:19:29.141 ]' 00:19:29.141 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.141 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.141 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.141 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:29.141 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.141 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.141 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.141 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.399 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:19:29.399 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:19:29.969 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.969 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:29.969 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.969 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.969 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.969 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.969 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:29.969 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:30.227 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:19:30.227 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.227 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:30.227 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:30.227 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:30.227 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.227 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.227 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.227 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.227 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.227 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.227 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.227 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.486 00:19:30.486 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.486 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.486 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.746 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.746 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.746 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.746 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.746 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.746 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.746 { 00:19:30.746 "cntlid": 99, 
00:19:30.746 "qid": 0, 00:19:30.746 "state": "enabled", 00:19:30.746 "thread": "nvmf_tgt_poll_group_000", 00:19:30.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:30.746 "listen_address": { 00:19:30.746 "trtype": "TCP", 00:19:30.746 "adrfam": "IPv4", 00:19:30.746 "traddr": "10.0.0.2", 00:19:30.746 "trsvcid": "4420" 00:19:30.746 }, 00:19:30.746 "peer_address": { 00:19:30.746 "trtype": "TCP", 00:19:30.746 "adrfam": "IPv4", 00:19:30.746 "traddr": "10.0.0.1", 00:19:30.746 "trsvcid": "56884" 00:19:30.746 }, 00:19:30.746 "auth": { 00:19:30.746 "state": "completed", 00:19:30.746 "digest": "sha512", 00:19:30.746 "dhgroup": "null" 00:19:30.746 } 00:19:30.746 } 00:19:30.746 ]' 00:19:30.746 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.746 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.746 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.746 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:30.746 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.746 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.746 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.746 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.005 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret 
DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:19:31.005 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:19:31.579 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.579 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:31.579 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.579 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.579 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.579 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.579 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:31.579 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:31.890 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:19:31.890 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.890 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:31.890 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:31.890 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:31.890 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.890 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.890 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.890 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.890 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.890 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.890 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.890 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.890 00:19:31.890 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.890 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.890 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.155 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.155 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.155 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.155 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.155 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.155 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.155 { 00:19:32.155 "cntlid": 101, 00:19:32.155 "qid": 0, 00:19:32.155 "state": "enabled", 00:19:32.155 "thread": "nvmf_tgt_poll_group_000", 00:19:32.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:32.155 "listen_address": { 00:19:32.155 "trtype": "TCP", 00:19:32.155 "adrfam": "IPv4", 00:19:32.155 "traddr": "10.0.0.2", 00:19:32.155 "trsvcid": "4420" 00:19:32.155 }, 00:19:32.155 "peer_address": { 00:19:32.155 "trtype": "TCP", 00:19:32.155 "adrfam": "IPv4", 00:19:32.155 "traddr": "10.0.0.1", 00:19:32.155 "trsvcid": "56900" 00:19:32.155 }, 00:19:32.155 "auth": { 00:19:32.155 "state": "completed", 00:19:32.155 "digest": "sha512", 00:19:32.155 "dhgroup": "null" 00:19:32.155 } 00:19:32.155 } 
00:19:32.155 ]' 00:19:32.155 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.155 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.155 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.429 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:32.429 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.429 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.429 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.429 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.429 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:19:32.429 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:19:33.072 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.072 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.072 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:33.072 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.072 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.072 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.072 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.072 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:33.072 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:33.330 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:33.330 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.330 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:33.330 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:33.330 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:33.330 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.330 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:33.330 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.330 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.330 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.330 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:33.330 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:33.330 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:33.588 00:19:33.588 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.588 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.588 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.847 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.847 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:33.847 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.847 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.847 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.847 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.847 { 00:19:33.847 "cntlid": 103, 00:19:33.847 "qid": 0, 00:19:33.847 "state": "enabled", 00:19:33.847 "thread": "nvmf_tgt_poll_group_000", 00:19:33.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:33.847 "listen_address": { 00:19:33.847 "trtype": "TCP", 00:19:33.847 "adrfam": "IPv4", 00:19:33.847 "traddr": "10.0.0.2", 00:19:33.847 "trsvcid": "4420" 00:19:33.847 }, 00:19:33.847 "peer_address": { 00:19:33.847 "trtype": "TCP", 00:19:33.847 "adrfam": "IPv4", 00:19:33.847 "traddr": "10.0.0.1", 00:19:33.847 "trsvcid": "56918" 00:19:33.847 }, 00:19:33.847 "auth": { 00:19:33.847 "state": "completed", 00:19:33.847 "digest": "sha512", 00:19:33.847 "dhgroup": "null" 00:19:33.847 } 00:19:33.847 } 00:19:33.847 ]' 00:19:33.847 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.847 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.847 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.847 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:33.847 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.847 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.847 10:47:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.847 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.106 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:19:34.106 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:19:34.672 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.672 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:34.672 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.672 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.672 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.672 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.672 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.672 10:47:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.672 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.930 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:34.930 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.930 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:34.930 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:34.930 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:34.930 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.930 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.930 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.930 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.930 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.930 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.930 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.930 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.187 00:19:35.187 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.187 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.187 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.444 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.444 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.444 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.444 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.444 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.444 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.444 { 00:19:35.444 "cntlid": 105, 00:19:35.444 "qid": 0, 00:19:35.444 "state": "enabled", 00:19:35.444 "thread": "nvmf_tgt_poll_group_000", 00:19:35.444 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:35.444 "listen_address": { 00:19:35.444 "trtype": "TCP", 00:19:35.444 "adrfam": "IPv4", 00:19:35.444 "traddr": "10.0.0.2", 00:19:35.444 "trsvcid": "4420" 00:19:35.444 }, 00:19:35.444 "peer_address": { 00:19:35.444 "trtype": "TCP", 00:19:35.444 "adrfam": "IPv4", 00:19:35.444 "traddr": "10.0.0.1", 00:19:35.444 "trsvcid": "56938" 00:19:35.444 }, 00:19:35.444 "auth": { 00:19:35.444 "state": "completed", 00:19:35.444 "digest": "sha512", 00:19:35.444 "dhgroup": "ffdhe2048" 00:19:35.444 } 00:19:35.444 } 00:19:35.444 ]' 00:19:35.444 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.444 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.444 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.444 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:35.444 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.444 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.445 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.445 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.703 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret 
DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:19:35.703 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:19:36.269 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.269 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:36.269 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.269 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.269 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.269 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.269 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:36.269 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:36.528 10:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:36.528 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.528 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:36.528 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:36.528 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:36.528 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.528 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.528 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.528 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.528 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.528 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.528 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.528 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.787 00:19:36.787 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.787 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.787 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.787 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.787 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.787 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.787 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.787 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.787 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.787 { 00:19:36.787 "cntlid": 107, 00:19:36.787 "qid": 0, 00:19:36.787 "state": "enabled", 00:19:36.787 "thread": "nvmf_tgt_poll_group_000", 00:19:36.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:36.787 "listen_address": { 00:19:36.787 "trtype": "TCP", 00:19:36.787 "adrfam": "IPv4", 00:19:36.787 "traddr": "10.0.0.2", 00:19:36.787 "trsvcid": "4420" 00:19:36.787 }, 00:19:36.787 "peer_address": { 00:19:36.787 "trtype": "TCP", 00:19:36.787 "adrfam": "IPv4", 00:19:36.787 "traddr": "10.0.0.1", 00:19:36.787 "trsvcid": "56968" 00:19:36.787 }, 00:19:36.787 "auth": { 00:19:36.787 "state": 
"completed", 00:19:36.787 "digest": "sha512", 00:19:36.787 "dhgroup": "ffdhe2048" 00:19:36.787 } 00:19:36.787 } 00:19:36.787 ]' 00:19:36.787 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.045 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.045 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.045 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:37.045 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.045 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.045 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.045 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.304 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:19:37.304 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:19:37.871 10:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.871 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:37.871 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.871 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.871 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.871 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.871 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:37.871 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:37.871 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:37.871 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.871 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:37.872 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:37.872 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:37.872 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.872 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.872 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.872 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.131 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.131 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.131 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.131 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.131 00:19:38.389 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.389 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.389 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.389 
10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.389 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.389 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.389 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.389 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.389 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.389 { 00:19:38.389 "cntlid": 109, 00:19:38.389 "qid": 0, 00:19:38.389 "state": "enabled", 00:19:38.389 "thread": "nvmf_tgt_poll_group_000", 00:19:38.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:38.389 "listen_address": { 00:19:38.389 "trtype": "TCP", 00:19:38.389 "adrfam": "IPv4", 00:19:38.389 "traddr": "10.0.0.2", 00:19:38.389 "trsvcid": "4420" 00:19:38.389 }, 00:19:38.389 "peer_address": { 00:19:38.389 "trtype": "TCP", 00:19:38.389 "adrfam": "IPv4", 00:19:38.389 "traddr": "10.0.0.1", 00:19:38.389 "trsvcid": "57004" 00:19:38.389 }, 00:19:38.389 "auth": { 00:19:38.389 "state": "completed", 00:19:38.389 "digest": "sha512", 00:19:38.389 "dhgroup": "ffdhe2048" 00:19:38.389 } 00:19:38.389 } 00:19:38.389 ]' 00:19:38.389 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.389 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.389 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.648 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:38.648 10:47:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.648 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.648 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.648 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.648 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:19:38.648 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:19:39.215 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.474 
10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.474 10:47:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.474 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.732 00:19:39.732 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.732 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.732 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.990 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.990 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.990 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.991 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.991 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.991 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.991 { 00:19:39.991 "cntlid": 111, 
00:19:39.991 "qid": 0, 00:19:39.991 "state": "enabled", 00:19:39.991 "thread": "nvmf_tgt_poll_group_000", 00:19:39.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:39.991 "listen_address": { 00:19:39.991 "trtype": "TCP", 00:19:39.991 "adrfam": "IPv4", 00:19:39.991 "traddr": "10.0.0.2", 00:19:39.991 "trsvcid": "4420" 00:19:39.991 }, 00:19:39.991 "peer_address": { 00:19:39.991 "trtype": "TCP", 00:19:39.991 "adrfam": "IPv4", 00:19:39.991 "traddr": "10.0.0.1", 00:19:39.991 "trsvcid": "57048" 00:19:39.991 }, 00:19:39.991 "auth": { 00:19:39.991 "state": "completed", 00:19:39.991 "digest": "sha512", 00:19:39.991 "dhgroup": "ffdhe2048" 00:19:39.991 } 00:19:39.991 } 00:19:39.991 ]' 00:19:39.991 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.991 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.991 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.249 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:40.249 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.249 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.249 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.249 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.249 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:19:40.249 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:19:40.815 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.815 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:41.073 10:47:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.073 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.332 00:19:41.332 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.332 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.332 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.590 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.590 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.590 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.590 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.590 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.590 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.590 { 00:19:41.590 "cntlid": 113, 00:19:41.590 "qid": 0, 00:19:41.590 "state": "enabled", 00:19:41.590 "thread": "nvmf_tgt_poll_group_000", 00:19:41.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:41.590 "listen_address": { 00:19:41.590 "trtype": "TCP", 00:19:41.590 "adrfam": "IPv4", 00:19:41.590 "traddr": "10.0.0.2", 00:19:41.590 "trsvcid": "4420" 00:19:41.590 }, 00:19:41.590 "peer_address": { 00:19:41.590 "trtype": "TCP", 00:19:41.590 "adrfam": "IPv4", 00:19:41.590 "traddr": "10.0.0.1", 00:19:41.590 "trsvcid": "40836" 00:19:41.590 }, 00:19:41.590 "auth": { 00:19:41.590 "state": 
"completed", 00:19:41.590 "digest": "sha512", 00:19:41.590 "dhgroup": "ffdhe3072" 00:19:41.590 } 00:19:41.590 } 00:19:41.590 ]' 00:19:41.590 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.590 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.590 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.848 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:41.848 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.848 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.848 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.848 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.848 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:19:41.848 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret 
DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:19:42.416 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.416 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:42.416 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.416 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.416 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.674 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.674 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:42.674 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:42.674 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:42.674 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.675 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:42.675 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:42.675 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:42.675 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.675 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.675 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.675 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.675 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.675 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.675 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.675 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.933 00:19:42.933 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.933 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.933 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.192 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.192 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.192 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.192 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.192 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.192 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.192 { 00:19:43.192 "cntlid": 115, 00:19:43.192 "qid": 0, 00:19:43.192 "state": "enabled", 00:19:43.192 "thread": "nvmf_tgt_poll_group_000", 00:19:43.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:43.192 "listen_address": { 00:19:43.192 "trtype": "TCP", 00:19:43.192 "adrfam": "IPv4", 00:19:43.192 "traddr": "10.0.0.2", 00:19:43.192 "trsvcid": "4420" 00:19:43.192 }, 00:19:43.192 "peer_address": { 00:19:43.192 "trtype": "TCP", 00:19:43.192 "adrfam": "IPv4", 00:19:43.192 "traddr": "10.0.0.1", 00:19:43.192 "trsvcid": "40866" 00:19:43.192 }, 00:19:43.192 "auth": { 00:19:43.192 "state": "completed", 00:19:43.192 "digest": "sha512", 00:19:43.192 "dhgroup": "ffdhe3072" 00:19:43.192 } 00:19:43.192 } 00:19:43.192 ]' 00:19:43.192 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.192 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.192 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.192 10:47:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:43.192 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.450 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.450 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.450 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.450 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:19:43.450 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:19:44.016 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.016 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:44.016 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:44.016 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.016 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.016 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.016 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:44.016 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:44.274 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:19:44.274 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.274 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:44.274 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:44.274 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:44.274 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.274 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.274 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.274 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:44.274 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.274 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.274 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.274 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.533 00:19:44.533 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.533 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.533 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.792 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.792 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.792 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.792 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.792 10:47:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.792 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.792 { 00:19:44.792 "cntlid": 117, 00:19:44.792 "qid": 0, 00:19:44.792 "state": "enabled", 00:19:44.792 "thread": "nvmf_tgt_poll_group_000", 00:19:44.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:44.792 "listen_address": { 00:19:44.792 "trtype": "TCP", 00:19:44.792 "adrfam": "IPv4", 00:19:44.792 "traddr": "10.0.0.2", 00:19:44.792 "trsvcid": "4420" 00:19:44.792 }, 00:19:44.792 "peer_address": { 00:19:44.792 "trtype": "TCP", 00:19:44.792 "adrfam": "IPv4", 00:19:44.792 "traddr": "10.0.0.1", 00:19:44.792 "trsvcid": "40886" 00:19:44.792 }, 00:19:44.792 "auth": { 00:19:44.792 "state": "completed", 00:19:44.792 "digest": "sha512", 00:19:44.792 "dhgroup": "ffdhe3072" 00:19:44.792 } 00:19:44.792 } 00:19:44.792 ]' 00:19:44.792 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.792 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.792 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.792 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:44.792 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.051 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.051 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.051 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.051 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:19:45.051 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:19:45.618 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.618 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:45.618 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.618 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.618 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.618 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.618 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:45.618 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:45.877 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:45.877 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.877 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:45.877 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:45.877 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:45.877 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.877 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:45.877 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.877 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.877 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.877 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:45.877 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:45.877 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.136 00:19:46.136 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.136 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.136 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.394 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.394 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.394 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.395 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.395 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.395 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.395 { 00:19:46.395 "cntlid": 119, 00:19:46.395 "qid": 0, 00:19:46.395 "state": "enabled", 00:19:46.395 "thread": "nvmf_tgt_poll_group_000", 00:19:46.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:46.395 "listen_address": { 00:19:46.395 "trtype": "TCP", 00:19:46.395 "adrfam": "IPv4", 00:19:46.395 "traddr": "10.0.0.2", 00:19:46.395 "trsvcid": "4420" 00:19:46.395 }, 00:19:46.395 "peer_address": { 00:19:46.395 "trtype": "TCP", 00:19:46.395 "adrfam": "IPv4", 00:19:46.395 "traddr": "10.0.0.1", 
00:19:46.395 "trsvcid": "40928" 00:19:46.395 }, 00:19:46.395 "auth": { 00:19:46.395 "state": "completed", 00:19:46.395 "digest": "sha512", 00:19:46.395 "dhgroup": "ffdhe3072" 00:19:46.395 } 00:19:46.395 } 00:19:46.395 ]' 00:19:46.395 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.395 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.395 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.395 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:46.395 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.395 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.395 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.395 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.653 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:19:46.654 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:19:47.222 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.222 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:47.222 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.222 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.222 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.222 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.222 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.222 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:47.222 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:47.481 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:47.481 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.481 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:47.481 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:47.481 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:47.481 10:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.481 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.481 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.481 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.481 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.481 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.481 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.481 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.739 00:19:47.739 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.739 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.740 10:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.998 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.998 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.998 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.998 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.998 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.998 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.998 { 00:19:47.998 "cntlid": 121, 00:19:47.998 "qid": 0, 00:19:47.998 "state": "enabled", 00:19:47.998 "thread": "nvmf_tgt_poll_group_000", 00:19:47.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:47.998 "listen_address": { 00:19:47.998 "trtype": "TCP", 00:19:47.998 "adrfam": "IPv4", 00:19:47.998 "traddr": "10.0.0.2", 00:19:47.998 "trsvcid": "4420" 00:19:47.998 }, 00:19:47.998 "peer_address": { 00:19:47.998 "trtype": "TCP", 00:19:47.998 "adrfam": "IPv4", 00:19:47.998 "traddr": "10.0.0.1", 00:19:47.998 "trsvcid": "40958" 00:19:47.998 }, 00:19:47.998 "auth": { 00:19:47.998 "state": "completed", 00:19:47.998 "digest": "sha512", 00:19:47.998 "dhgroup": "ffdhe4096" 00:19:47.998 } 00:19:47.998 } 00:19:47.998 ]' 00:19:47.998 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.998 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.998 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.998 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:47.998 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.998 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.998 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.998 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.257 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:19:48.257 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:19:48.823 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.823 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:48.823 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.823 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.823 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.823 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.823 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:48.823 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:49.082 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:49.082 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.082 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:49.082 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:49.082 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:49.082 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.082 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.082 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.082 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:49.082 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.082 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.082 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.082 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.341 00:19:49.341 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.341 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.341 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.599 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.599 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.599 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.599 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.599 
10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.599 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.599 { 00:19:49.599 "cntlid": 123, 00:19:49.599 "qid": 0, 00:19:49.599 "state": "enabled", 00:19:49.599 "thread": "nvmf_tgt_poll_group_000", 00:19:49.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:49.599 "listen_address": { 00:19:49.599 "trtype": "TCP", 00:19:49.599 "adrfam": "IPv4", 00:19:49.599 "traddr": "10.0.0.2", 00:19:49.599 "trsvcid": "4420" 00:19:49.599 }, 00:19:49.599 "peer_address": { 00:19:49.599 "trtype": "TCP", 00:19:49.599 "adrfam": "IPv4", 00:19:49.599 "traddr": "10.0.0.1", 00:19:49.599 "trsvcid": "40988" 00:19:49.599 }, 00:19:49.599 "auth": { 00:19:49.599 "state": "completed", 00:19:49.599 "digest": "sha512", 00:19:49.599 "dhgroup": "ffdhe4096" 00:19:49.599 } 00:19:49.599 } 00:19:49.599 ]' 00:19:49.599 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.599 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.599 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.599 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:49.599 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.599 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.599 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.599 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.857 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:19:49.857 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:19:50.423 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.423 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:50.423 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.423 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.423 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.423 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.423 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:50.423 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:50.682 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:50.682 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.682 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:50.682 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:50.682 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:50.682 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.682 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.682 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.682 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.682 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.682 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.682 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.682 10:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.940 00:19:50.940 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.940 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.940 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.198 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.198 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.198 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.198 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.199 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.199 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.199 { 00:19:51.199 "cntlid": 125, 00:19:51.199 "qid": 0, 00:19:51.199 "state": "enabled", 00:19:51.199 "thread": "nvmf_tgt_poll_group_000", 00:19:51.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:51.199 "listen_address": { 00:19:51.199 "trtype": "TCP", 00:19:51.199 "adrfam": "IPv4", 00:19:51.199 "traddr": "10.0.0.2", 00:19:51.199 "trsvcid": "4420" 00:19:51.199 }, 00:19:51.199 "peer_address": { 
00:19:51.199 "trtype": "TCP", 00:19:51.199 "adrfam": "IPv4", 00:19:51.199 "traddr": "10.0.0.1", 00:19:51.199 "trsvcid": "53798" 00:19:51.199 }, 00:19:51.199 "auth": { 00:19:51.199 "state": "completed", 00:19:51.199 "digest": "sha512", 00:19:51.199 "dhgroup": "ffdhe4096" 00:19:51.199 } 00:19:51.199 } 00:19:51.199 ]' 00:19:51.199 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.199 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.199 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.199 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:51.199 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.199 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.199 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.199 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.457 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:19:51.457 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:19:52.023 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.023 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:52.023 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.023 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.023 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.023 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.023 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:52.023 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:52.282 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:52.282 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.282 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:52.282 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:52.282 10:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:52.282 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.282 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:52.282 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.282 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.282 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.282 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:52.282 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:52.282 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:52.541 00:19:52.541 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.541 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.541 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.799 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.800 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.800 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.800 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.800 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.800 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.800 { 00:19:52.800 "cntlid": 127, 00:19:52.800 "qid": 0, 00:19:52.800 "state": "enabled", 00:19:52.800 "thread": "nvmf_tgt_poll_group_000", 00:19:52.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:52.800 "listen_address": { 00:19:52.800 "trtype": "TCP", 00:19:52.800 "adrfam": "IPv4", 00:19:52.800 "traddr": "10.0.0.2", 00:19:52.800 "trsvcid": "4420" 00:19:52.800 }, 00:19:52.800 "peer_address": { 00:19:52.800 "trtype": "TCP", 00:19:52.800 "adrfam": "IPv4", 00:19:52.800 "traddr": "10.0.0.1", 00:19:52.800 "trsvcid": "53828" 00:19:52.800 }, 00:19:52.800 "auth": { 00:19:52.800 "state": "completed", 00:19:52.800 "digest": "sha512", 00:19:52.800 "dhgroup": "ffdhe4096" 00:19:52.800 } 00:19:52.800 } 00:19:52.800 ]' 00:19:52.800 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.800 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.800 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.800 10:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:52.800 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.800 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.800 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.800 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.059 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:19:53.059 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:19:53.626 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.626 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:53.626 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.626 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:53.626 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.626 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.626 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.626 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:53.626 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:53.885 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:53.885 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.885 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:53.885 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:53.885 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:53.885 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.885 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.885 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.885 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:53.885 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.885 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.885 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.885 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.143 00:19:54.143 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.143 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.143 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.401 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.401 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.401 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.401 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.401 10:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.401 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.401 { 00:19:54.401 "cntlid": 129, 00:19:54.401 "qid": 0, 00:19:54.401 "state": "enabled", 00:19:54.401 "thread": "nvmf_tgt_poll_group_000", 00:19:54.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:54.401 "listen_address": { 00:19:54.401 "trtype": "TCP", 00:19:54.401 "adrfam": "IPv4", 00:19:54.402 "traddr": "10.0.0.2", 00:19:54.402 "trsvcid": "4420" 00:19:54.402 }, 00:19:54.402 "peer_address": { 00:19:54.402 "trtype": "TCP", 00:19:54.402 "adrfam": "IPv4", 00:19:54.402 "traddr": "10.0.0.1", 00:19:54.402 "trsvcid": "53856" 00:19:54.402 }, 00:19:54.402 "auth": { 00:19:54.402 "state": "completed", 00:19:54.402 "digest": "sha512", 00:19:54.402 "dhgroup": "ffdhe6144" 00:19:54.402 } 00:19:54.402 } 00:19:54.402 ]' 00:19:54.402 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.402 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.402 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.402 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:54.402 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.402 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.402 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.402 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.660 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:19:54.660 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:19:55.228 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.228 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:55.228 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.228 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.228 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.228 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.228 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:55.228 10:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:55.487 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:55.487 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.487 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:55.487 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:55.487 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:55.487 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.487 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.487 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.487 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.487 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.487 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.487 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.487 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.746 00:19:55.746 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.746 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.746 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.004 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.004 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.004 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.004 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.004 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.004 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.004 { 00:19:56.004 "cntlid": 131, 00:19:56.004 "qid": 0, 00:19:56.004 "state": "enabled", 00:19:56.004 "thread": "nvmf_tgt_poll_group_000", 00:19:56.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:56.004 "listen_address": { 00:19:56.004 "trtype": "TCP", 00:19:56.004 "adrfam": "IPv4", 00:19:56.004 "traddr": "10.0.0.2", 00:19:56.004 
"trsvcid": "4420" 00:19:56.004 }, 00:19:56.004 "peer_address": { 00:19:56.004 "trtype": "TCP", 00:19:56.004 "adrfam": "IPv4", 00:19:56.004 "traddr": "10.0.0.1", 00:19:56.004 "trsvcid": "53888" 00:19:56.004 }, 00:19:56.004 "auth": { 00:19:56.004 "state": "completed", 00:19:56.004 "digest": "sha512", 00:19:56.004 "dhgroup": "ffdhe6144" 00:19:56.004 } 00:19:56.004 } 00:19:56.004 ]' 00:19:56.004 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.004 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.004 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.262 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:56.262 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.262 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.262 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.262 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.262 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:19:56.262 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:19:56.828 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.086 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.654 00:19:57.654 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.654 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:57.654 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.654 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.654 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.654 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.654 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.654 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.654 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.654 { 00:19:57.654 "cntlid": 133, 00:19:57.654 "qid": 0, 00:19:57.654 "state": "enabled", 00:19:57.654 "thread": "nvmf_tgt_poll_group_000", 00:19:57.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:57.654 "listen_address": { 00:19:57.654 "trtype": "TCP", 00:19:57.654 "adrfam": "IPv4", 00:19:57.654 "traddr": "10.0.0.2", 00:19:57.654 "trsvcid": "4420" 00:19:57.654 }, 00:19:57.654 "peer_address": { 00:19:57.654 "trtype": "TCP", 00:19:57.654 "adrfam": "IPv4", 00:19:57.654 "traddr": "10.0.0.1", 00:19:57.654 "trsvcid": "53918" 00:19:57.654 }, 00:19:57.654 "auth": { 00:19:57.654 "state": "completed", 00:19:57.654 "digest": "sha512", 00:19:57.654 "dhgroup": "ffdhe6144" 00:19:57.654 } 00:19:57.654 } 00:19:57.654 ]' 00:19:57.654 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.654 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:57.654 10:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.912 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:57.912 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.912 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.912 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.912 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.170 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:19:58.170 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.737 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.303 00:19:59.303 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.303 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.303 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.303 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.303 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.303 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.303 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:59.303 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.303 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.303 { 00:19:59.303 "cntlid": 135, 00:19:59.303 "qid": 0, 00:19:59.303 "state": "enabled", 00:19:59.303 "thread": "nvmf_tgt_poll_group_000", 00:19:59.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:59.303 "listen_address": { 00:19:59.303 "trtype": "TCP", 00:19:59.303 "adrfam": "IPv4", 00:19:59.303 "traddr": "10.0.0.2", 00:19:59.303 "trsvcid": "4420" 00:19:59.303 }, 00:19:59.303 "peer_address": { 00:19:59.303 "trtype": "TCP", 00:19:59.303 "adrfam": "IPv4", 00:19:59.303 "traddr": "10.0.0.1", 00:19:59.303 "trsvcid": "53946" 00:19:59.303 }, 00:19:59.303 "auth": { 00:19:59.303 "state": "completed", 00:19:59.303 "digest": "sha512", 00:19:59.303 "dhgroup": "ffdhe6144" 00:19:59.303 } 00:19:59.303 } 00:19:59.303 ]' 00:19:59.303 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.303 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.559 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.559 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:59.559 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.559 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.559 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.559 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.816 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:19:59.816 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:20:00.382 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.382 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:00.382 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.382 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.382 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.382 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.382 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.382 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:00.382 10:47:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:00.382 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:20:00.382 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.382 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:00.382 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:00.382 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:00.382 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.382 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.382 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.382 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.382 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.382 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.382 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.382 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.949 00:20:00.949 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.949 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.949 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.210 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.210 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.210 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.210 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.210 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.210 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.210 { 00:20:01.210 "cntlid": 137, 00:20:01.210 "qid": 0, 00:20:01.210 "state": "enabled", 00:20:01.210 "thread": "nvmf_tgt_poll_group_000", 00:20:01.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:01.210 "listen_address": { 00:20:01.210 "trtype": "TCP", 00:20:01.210 "adrfam": "IPv4", 00:20:01.210 "traddr": "10.0.0.2", 00:20:01.210 
"trsvcid": "4420" 00:20:01.210 }, 00:20:01.210 "peer_address": { 00:20:01.210 "trtype": "TCP", 00:20:01.210 "adrfam": "IPv4", 00:20:01.210 "traddr": "10.0.0.1", 00:20:01.210 "trsvcid": "49344" 00:20:01.210 }, 00:20:01.210 "auth": { 00:20:01.210 "state": "completed", 00:20:01.210 "digest": "sha512", 00:20:01.210 "dhgroup": "ffdhe8192" 00:20:01.210 } 00:20:01.210 } 00:20:01.210 ]' 00:20:01.210 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.210 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.210 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.210 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:01.210 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.210 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.210 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.210 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.501 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:20:01.501 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:20:02.089 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.089 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:02.089 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.089 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.089 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.089 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.089 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:02.089 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:02.347 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:20:02.347 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.347 10:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:02.347 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:02.347 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:02.347 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.347 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.347 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.347 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.347 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.348 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.348 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.348 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.914 00:20:02.915 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.915 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.915 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.915 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.915 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.915 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.915 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.915 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.915 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.915 { 00:20:02.915 "cntlid": 139, 00:20:02.915 "qid": 0, 00:20:02.915 "state": "enabled", 00:20:02.915 "thread": "nvmf_tgt_poll_group_000", 00:20:02.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:02.915 "listen_address": { 00:20:02.915 "trtype": "TCP", 00:20:02.915 "adrfam": "IPv4", 00:20:02.915 "traddr": "10.0.0.2", 00:20:02.915 "trsvcid": "4420" 00:20:02.915 }, 00:20:02.915 "peer_address": { 00:20:02.915 "trtype": "TCP", 00:20:02.915 "adrfam": "IPv4", 00:20:02.915 "traddr": "10.0.0.1", 00:20:02.915 "trsvcid": "49370" 00:20:02.915 }, 00:20:02.915 "auth": { 00:20:02.915 "state": "completed", 00:20:02.915 "digest": "sha512", 00:20:02.915 "dhgroup": "ffdhe8192" 00:20:02.915 } 00:20:02.915 } 00:20:02.915 ]' 00:20:02.915 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.915 10:47:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:02.915 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.173 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:03.173 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.173 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.173 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.173 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.430 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:20:03.430 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: --dhchap-ctrl-secret DHHC-1:02:NmM4ZjgwMDA4Y2E2NGFjNTZlMzg3MDhlYzEzNTM0NTA2Nzg5OGZiOWYzYjU4YTM5UG4Mzg==: 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.997 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.564 00:20:04.564 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.564 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.564 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.822 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.822 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.822 10:47:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.822 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.822 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.822 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.822 { 00:20:04.823 "cntlid": 141, 00:20:04.823 "qid": 0, 00:20:04.823 "state": "enabled", 00:20:04.823 "thread": "nvmf_tgt_poll_group_000", 00:20:04.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:04.823 "listen_address": { 00:20:04.823 "trtype": "TCP", 00:20:04.823 "adrfam": "IPv4", 00:20:04.823 "traddr": "10.0.0.2", 00:20:04.823 "trsvcid": "4420" 00:20:04.823 }, 00:20:04.823 "peer_address": { 00:20:04.823 "trtype": "TCP", 00:20:04.823 "adrfam": "IPv4", 00:20:04.823 "traddr": "10.0.0.1", 00:20:04.823 "trsvcid": "49394" 00:20:04.823 }, 00:20:04.823 "auth": { 00:20:04.823 "state": "completed", 00:20:04.823 "digest": "sha512", 00:20:04.823 "dhgroup": "ffdhe8192" 00:20:04.823 } 00:20:04.823 } 00:20:04.823 ]' 00:20:04.823 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.823 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.823 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.823 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.823 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.823 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.823 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.823 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.081 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:20:05.081 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:01:MDJlNTA4YmFkMTkyNDVkNWU4ZDI2MmUzYTE1Y2M2YjL8COlw: 00:20:05.662 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.662 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:05.662 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.662 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.662 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.662 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.662 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:05.662 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:05.921 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:20:05.921 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.921 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:05.921 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:05.921 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:05.921 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.921 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:05.921 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.921 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.921 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.921 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:05.921 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.921 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:06.488 00:20:06.488 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.488 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.488 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.488 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.488 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.488 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.488 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.488 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.488 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.488 { 00:20:06.488 "cntlid": 143, 00:20:06.488 "qid": 0, 00:20:06.488 "state": "enabled", 00:20:06.488 "thread": "nvmf_tgt_poll_group_000", 00:20:06.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:06.488 "listen_address": { 00:20:06.488 "trtype": "TCP", 00:20:06.488 "adrfam": 
"IPv4", 00:20:06.488 "traddr": "10.0.0.2", 00:20:06.488 "trsvcid": "4420" 00:20:06.488 }, 00:20:06.488 "peer_address": { 00:20:06.488 "trtype": "TCP", 00:20:06.488 "adrfam": "IPv4", 00:20:06.488 "traddr": "10.0.0.1", 00:20:06.488 "trsvcid": "49416" 00:20:06.488 }, 00:20:06.488 "auth": { 00:20:06.488 "state": "completed", 00:20:06.488 "digest": "sha512", 00:20:06.488 "dhgroup": "ffdhe8192" 00:20:06.488 } 00:20:06.488 } 00:20:06.488 ]' 00:20:06.488 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.488 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:06.488 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.746 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:06.746 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.746 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.746 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.746 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.746 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:20:06.746 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:20:07.312 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:07.571 10:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.571 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.138 00:20:08.138 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.138 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.138 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.397 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.397 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.397 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.397 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.397 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.397 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.397 { 00:20:08.397 "cntlid": 145, 00:20:08.397 "qid": 0, 00:20:08.397 "state": "enabled", 00:20:08.397 "thread": "nvmf_tgt_poll_group_000", 00:20:08.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:08.397 "listen_address": { 00:20:08.397 "trtype": "TCP", 00:20:08.397 "adrfam": "IPv4", 00:20:08.397 "traddr": "10.0.0.2", 00:20:08.397 "trsvcid": "4420" 00:20:08.397 }, 00:20:08.397 "peer_address": { 00:20:08.397 "trtype": "TCP", 00:20:08.397 "adrfam": "IPv4", 00:20:08.397 "traddr": "10.0.0.1", 00:20:08.397 "trsvcid": "49440" 00:20:08.397 }, 00:20:08.397 "auth": { 00:20:08.397 "state": 
"completed", 00:20:08.397 "digest": "sha512", 00:20:08.397 "dhgroup": "ffdhe8192" 00:20:08.397 } 00:20:08.397 } 00:20:08.397 ]' 00:20:08.397 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.397 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:08.397 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.398 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:08.398 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.398 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.398 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.398 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.657 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:20:08.657 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NjcwZDFkZjI3ZGM1MzEwMGViYmQyMDFkZjM0ZjdlNmQwOWMzZjU1OTczMjc1NzUwYUx+rA==: --dhchap-ctrl-secret 
DHHC-1:03:OTgyYTNmNWEwMTQ5OWQ3NDQyZTY0NDNiZWI1ZGRlZDRmOWY5ZWQ2MTM4NDE4NzY2ZmQzM2JmNzZlZWJlMTlkMC2WQ8o=: 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:09.224 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:09.791 request: 00:20:09.791 { 00:20:09.791 "name": "nvme0", 00:20:09.791 "trtype": "tcp", 00:20:09.791 "traddr": "10.0.0.2", 00:20:09.791 "adrfam": "ipv4", 00:20:09.791 "trsvcid": "4420", 00:20:09.791 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:09.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:09.791 "prchk_reftag": false, 00:20:09.791 "prchk_guard": false, 00:20:09.791 "hdgst": false, 00:20:09.791 "ddgst": false, 00:20:09.791 "dhchap_key": "key2", 00:20:09.791 "allow_unrecognized_csi": false, 00:20:09.791 "method": "bdev_nvme_attach_controller", 00:20:09.791 "req_id": 1 00:20:09.791 } 00:20:09.791 Got JSON-RPC error response 00:20:09.791 response: 00:20:09.791 { 00:20:09.791 "code": -5, 00:20:09.791 "message": 
"Input/output error" 00:20:09.791 } 00:20:09.791 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:09.791 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:09.791 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:09.791 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:09.791 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:09.791 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.791 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.791 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.791 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.791 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.791 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.791 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.791 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:09.791 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:09.792 10:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:09.792 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:09.792 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.792 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:09.792 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.792 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:09.792 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:09.792 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:10.050 request: 00:20:10.050 { 00:20:10.050 "name": "nvme0", 00:20:10.050 "trtype": "tcp", 00:20:10.050 "traddr": "10.0.0.2", 00:20:10.050 "adrfam": "ipv4", 00:20:10.050 "trsvcid": "4420", 00:20:10.050 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:10.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:10.050 "prchk_reftag": false, 00:20:10.050 "prchk_guard": false, 00:20:10.050 "hdgst": 
false, 00:20:10.050 "ddgst": false, 00:20:10.050 "dhchap_key": "key1", 00:20:10.050 "dhchap_ctrlr_key": "ckey2", 00:20:10.050 "allow_unrecognized_csi": false, 00:20:10.050 "method": "bdev_nvme_attach_controller", 00:20:10.050 "req_id": 1 00:20:10.050 } 00:20:10.050 Got JSON-RPC error response 00:20:10.050 response: 00:20:10.050 { 00:20:10.050 "code": -5, 00:20:10.050 "message": "Input/output error" 00:20:10.050 } 00:20:10.050 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:10.050 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:10.050 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:10.050 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:10.050 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:10.050 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.050 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.309 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.309 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:20:10.309 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.309 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.309 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.309 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.309 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:10.309 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.309 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:10.309 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.309 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:10.309 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.309 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.309 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.309 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.567 request: 00:20:10.567 { 00:20:10.567 "name": "nvme0", 00:20:10.567 "trtype": 
"tcp", 00:20:10.567 "traddr": "10.0.0.2", 00:20:10.567 "adrfam": "ipv4", 00:20:10.567 "trsvcid": "4420", 00:20:10.568 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:10.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:10.568 "prchk_reftag": false, 00:20:10.568 "prchk_guard": false, 00:20:10.568 "hdgst": false, 00:20:10.568 "ddgst": false, 00:20:10.568 "dhchap_key": "key1", 00:20:10.568 "dhchap_ctrlr_key": "ckey1", 00:20:10.568 "allow_unrecognized_csi": false, 00:20:10.568 "method": "bdev_nvme_attach_controller", 00:20:10.568 "req_id": 1 00:20:10.568 } 00:20:10.568 Got JSON-RPC error response 00:20:10.568 response: 00:20:10.568 { 00:20:10.568 "code": -5, 00:20:10.568 "message": "Input/output error" 00:20:10.568 } 00:20:10.568 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:10.568 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:10.568 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:10.568 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:10.568 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:10.568 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.568 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.568 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.568 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3910121 00:20:10.568 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 3910121 ']' 00:20:10.568 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3910121 00:20:10.568 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:10.568 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.568 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3910121 00:20:10.827 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:10.827 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:10.827 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3910121' 00:20:10.827 killing process with pid 3910121 00:20:10.827 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3910121 00:20:10.827 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3910121 00:20:10.827 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:10.827 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:10.827 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:10.827 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.827 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3931426 00:20:10.827 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3931426 00:20:10.827 10:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:10.827 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3931426 ']' 00:20:10.827 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.827 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.827 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.827 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.827 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.086 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.086 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:11.086 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:11.086 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:11.086 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.086 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.086 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:11.086 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 3931426 00:20:11.086 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3931426 ']' 00:20:11.086 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.086 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.086 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.086 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.086 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.345 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.345 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:11.345 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:20:11.345 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.345 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.345 null0 00:20:11.345 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.345 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:11.345 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CGK 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.pYE ]] 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pYE 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dAB 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Kmd ]] 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Kmd 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.v8K 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Z1q ]] 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Z1q 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.603 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.575 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.604 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.170 nvme0n1 00:20:12.170 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.170 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.170 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.427 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.427 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.427 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.427 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.427 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.427 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.427 { 00:20:12.427 "cntlid": 1, 00:20:12.427 "qid": 0, 00:20:12.427 "state": "enabled", 00:20:12.427 "thread": "nvmf_tgt_poll_group_000", 00:20:12.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:12.427 "listen_address": { 00:20:12.427 "trtype": "TCP", 00:20:12.427 "adrfam": "IPv4", 00:20:12.427 "traddr": "10.0.0.2", 00:20:12.427 "trsvcid": "4420" 00:20:12.427 }, 00:20:12.427 "peer_address": { 00:20:12.427 "trtype": "TCP", 00:20:12.427 "adrfam": "IPv4", 00:20:12.427 "traddr": 
"10.0.0.1", 00:20:12.427 "trsvcid": "33902" 00:20:12.427 }, 00:20:12.427 "auth": { 00:20:12.427 "state": "completed", 00:20:12.427 "digest": "sha512", 00:20:12.427 "dhgroup": "ffdhe8192" 00:20:12.427 } 00:20:12.427 } 00:20:12.427 ]' 00:20:12.427 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.427 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:12.427 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.685 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:12.685 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.685 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.685 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.685 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.942 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:20:12.942 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:20:13.509 10:48:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.509 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:13.509 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.509 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.509 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.509 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:13.509 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.509 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.509 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.509 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:13.509 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:13.509 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:13.509 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:13.509 10:48:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:13.509 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:13.769 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.769 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:13.769 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.769 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:13.769 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:13.769 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:13.769 request: 00:20:13.769 { 00:20:13.769 "name": "nvme0", 00:20:13.769 "trtype": "tcp", 00:20:13.769 "traddr": "10.0.0.2", 00:20:13.769 "adrfam": "ipv4", 00:20:13.769 "trsvcid": "4420", 00:20:13.769 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:13.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:13.769 "prchk_reftag": false, 00:20:13.769 "prchk_guard": false, 00:20:13.769 "hdgst": false, 00:20:13.769 "ddgst": false, 00:20:13.769 "dhchap_key": "key3", 00:20:13.769 
"allow_unrecognized_csi": false, 00:20:13.769 "method": "bdev_nvme_attach_controller", 00:20:13.769 "req_id": 1 00:20:13.769 } 00:20:13.769 Got JSON-RPC error response 00:20:13.769 response: 00:20:13.769 { 00:20:13.769 "code": -5, 00:20:13.769 "message": "Input/output error" 00:20:13.769 } 00:20:13.769 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:13.769 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:13.769 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:13.769 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:13.769 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:20:13.769 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:20:13.769 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:13.769 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:14.028 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:14.028 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:14.028 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:14.028 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:14.028 10:48:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.028 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:14.028 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.028 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:14.028 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.028 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.286 request: 00:20:14.287 { 00:20:14.287 "name": "nvme0", 00:20:14.287 "trtype": "tcp", 00:20:14.287 "traddr": "10.0.0.2", 00:20:14.287 "adrfam": "ipv4", 00:20:14.287 "trsvcid": "4420", 00:20:14.287 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:14.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:14.287 "prchk_reftag": false, 00:20:14.287 "prchk_guard": false, 00:20:14.287 "hdgst": false, 00:20:14.287 "ddgst": false, 00:20:14.287 "dhchap_key": "key3", 00:20:14.287 "allow_unrecognized_csi": false, 00:20:14.287 "method": "bdev_nvme_attach_controller", 00:20:14.287 "req_id": 1 00:20:14.287 } 00:20:14.287 Got JSON-RPC error response 00:20:14.287 response: 00:20:14.287 { 00:20:14.287 "code": -5, 00:20:14.287 "message": "Input/output error" 00:20:14.287 } 00:20:14.287 
10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:14.287 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:14.287 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:14.287 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:14.287 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:14.287 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:20:14.287 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:14.287 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:14.287 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:14.287 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:14.545 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:14.803 request: 00:20:14.803 { 00:20:14.803 "name": "nvme0", 00:20:14.803 "trtype": "tcp", 00:20:14.803 "traddr": "10.0.0.2", 00:20:14.803 "adrfam": "ipv4", 00:20:14.803 "trsvcid": "4420", 00:20:14.803 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:14.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:14.803 "prchk_reftag": false, 00:20:14.803 "prchk_guard": false, 00:20:14.803 "hdgst": false, 00:20:14.803 "ddgst": false, 00:20:14.803 "dhchap_key": "key0", 00:20:14.803 "dhchap_ctrlr_key": "key1", 00:20:14.803 "allow_unrecognized_csi": false, 00:20:14.803 "method": "bdev_nvme_attach_controller", 00:20:14.803 "req_id": 1 00:20:14.803 } 00:20:14.803 Got JSON-RPC error response 00:20:14.803 response: 00:20:14.803 { 00:20:14.803 "code": -5, 00:20:14.803 "message": "Input/output error" 00:20:14.803 } 00:20:14.803 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:14.803 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:14.803 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:14.803 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:14.803 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:20:14.803 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:14.803 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:15.062 nvme0n1 00:20:15.062 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:20:15.062 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:20:15.062 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.319 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.320 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.320 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.578 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:20:15.578 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.578 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:15.578 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.578 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:15.578 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:15.578 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:16.146 nvme0n1 00:20:16.146 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:20:16.146 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:20:16.146 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.405 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.405 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:16.405 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.405 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.405 
10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.405 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:20:16.405 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.405 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:20:16.691 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.691 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:20:16.691 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: --dhchap-ctrl-secret DHHC-1:03:Njc1MjBhYjA3OTc4NTQ2ODlkYThhNzU2NjNhN2UwNzRiYmFjMjgxNjk5YmMxYTE5YTM3MTE1YTIzZTlkNjE5NNWD/kI=: 00:20:17.257 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:20:17.257 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:20:17.257 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:20:17.257 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:20:17.257 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:20:17.257 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:20:17.257 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:20:17.257 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.258 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.516 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:20:17.516 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:17.516 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:20:17.516 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:17.516 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:17.516 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:17.516 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:17.516 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:17.516 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:17.516 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:17.774 request: 00:20:17.774 { 00:20:17.774 "name": "nvme0", 00:20:17.774 "trtype": "tcp", 00:20:17.774 "traddr": "10.0.0.2", 00:20:17.774 "adrfam": "ipv4", 00:20:17.774 "trsvcid": "4420", 00:20:17.774 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:17.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:17.774 "prchk_reftag": false, 00:20:17.774 "prchk_guard": false, 00:20:17.774 "hdgst": false, 00:20:17.774 "ddgst": false, 00:20:17.774 "dhchap_key": "key1", 00:20:17.774 "allow_unrecognized_csi": false, 00:20:17.774 "method": "bdev_nvme_attach_controller", 00:20:17.774 "req_id": 1 00:20:17.774 } 00:20:17.774 Got JSON-RPC error response 00:20:17.774 response: 00:20:17.774 { 00:20:17.774 "code": -5, 00:20:17.774 "message": "Input/output error" 00:20:17.774 } 00:20:18.032 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:18.032 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:18.032 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:18.032 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:18.032 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:18.032 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:18.032 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:18.598 nvme0n1 00:20:18.598 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:20:18.598 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:20:18.598 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.856 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.856 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.856 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.114 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:19.114 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.114 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:19.114 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.114 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:20:19.114 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:19.114 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:19.372 nvme0n1 00:20:19.372 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:20:19.372 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:20:19.372 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.629 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.629 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.629 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.887 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:19.887 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.887 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.887 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.887 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: '' 2s 00:20:19.887 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:19.887 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:19.887 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: 00:20:19.887 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:20:19.887 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:19.887 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:19.887 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: ]] 00:20:19.887 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MGRmOTZjOGM4YWI1NTJkNTVjNzE4YWI4MzllMzFjNTIJ0IyH: 00:20:19.887 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:20:19.887 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:19.887 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:21.787 
10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: 2s 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:20:21.787 10:48:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: ]] 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NjMyZGI0ZjhlZDY3MjJhMWZlM2I4ZWI4MjRhMGE5ODFiOTViMDMxZjU3MWE2NmY2F9YV3w==: 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:21.787 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:24.315 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:20:24.315 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:24.315 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:24.315 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:24.315 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:24.315 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:24.315 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:24.315 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.315 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:24.315 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.315 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.315 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.315 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:24.315 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:24.315 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:24.573 nvme0n1 00:20:24.573 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:20:24.573 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.573 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.573 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.573 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:24.573 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:25.140 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:20:25.140 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:20:25.140 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.400 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.400 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:25.400 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.400 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.400 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.400 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:20:25.400 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:20:25.659 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:20:25.659 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.659 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:20:25.659 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.659 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:25.659 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.659 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.659 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.659 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:25.659 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:25.659 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:25.659 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:25.659 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.659 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:25.659 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.659 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:25.659 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:26.225 request: 00:20:26.225 { 00:20:26.225 "name": "nvme0", 00:20:26.225 "dhchap_key": "key1", 00:20:26.225 "dhchap_ctrlr_key": "key3", 00:20:26.225 "method": "bdev_nvme_set_keys", 00:20:26.225 "req_id": 1 00:20:26.225 } 00:20:26.225 Got JSON-RPC error response 00:20:26.225 response: 00:20:26.225 { 00:20:26.225 "code": -13, 00:20:26.225 "message": "Permission denied" 00:20:26.225 } 00:20:26.225 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:26.225 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:26.225 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:26.225 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:26.225 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:26.225 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:26.225 10:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.483 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:20:26.483 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:20:27.417 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:27.417 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:27.417 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.677 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:20:27.677 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:27.677 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.677 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.677 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.677 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:27.677 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:27.677 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:28.244 nvme0n1 00:20:28.244 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:28.244 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.244 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.244 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.244 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:28.244 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:28.244 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:28.244 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:28.244 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.244 10:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:28.244 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.244 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:28.244 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:28.811 request: 00:20:28.811 { 00:20:28.811 "name": "nvme0", 00:20:28.811 "dhchap_key": "key2", 00:20:28.811 "dhchap_ctrlr_key": "key0", 00:20:28.811 "method": "bdev_nvme_set_keys", 00:20:28.811 "req_id": 1 00:20:28.811 } 00:20:28.811 Got JSON-RPC error response 00:20:28.811 response: 00:20:28.811 { 00:20:28.811 "code": -13, 00:20:28.811 "message": "Permission denied" 00:20:28.811 } 00:20:28.811 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:28.811 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:28.811 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:28.811 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:28.811 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:28.811 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.811 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:29.069 10:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:20:29.069 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:20:30.003 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:30.003 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:30.003 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.262 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:20:30.262 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:20:30.262 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:20:30.262 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3910141 00:20:30.262 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3910141 ']' 00:20:30.262 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3910141 00:20:30.262 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:30.262 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.262 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3910141 00:20:30.262 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:30.262 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:30.262 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 3910141' 00:20:30.262 killing process with pid 3910141 00:20:30.262 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3910141 00:20:30.262 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3910141 00:20:30.521 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:30.521 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:30.521 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:20:30.521 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:30.521 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:20:30.521 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:30.521 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:30.521 rmmod nvme_tcp 00:20:30.521 rmmod nvme_fabrics 00:20:30.521 rmmod nvme_keyring 00:20:30.521 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:30.521 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:20:30.521 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:20:30.522 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3931426 ']' 00:20:30.522 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3931426 00:20:30.522 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3931426 ']' 00:20:30.522 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3931426 
00:20:30.522 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:30.522 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.522 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3931426 00:20:30.781 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:30.781 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:30.781 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3931426' 00:20:30.781 killing process with pid 3931426 00:20:30.781 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3931426 00:20:30.781 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3931426 00:20:30.781 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:30.781 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:30.781 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:30.781 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:20:30.781 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:20:30.781 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:30.781 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:30.781 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:30.781 10:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:30.781 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.781 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.781 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.315 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:33.315 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.CGK /tmp/spdk.key-sha256.dAB /tmp/spdk.key-sha384.v8K /tmp/spdk.key-sha512.575 /tmp/spdk.key-sha512.pYE /tmp/spdk.key-sha384.Kmd /tmp/spdk.key-sha256.Z1q '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:33.315 00:20:33.315 real 2m31.550s 00:20:33.316 user 5m49.076s 00:20:33.316 sys 0m24.190s 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.316 ************************************ 00:20:33.316 END TEST nvmf_auth_target 00:20:33.316 ************************************ 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:33.316 ************************************ 00:20:33.316 START TEST nvmf_bdevio_no_huge 00:20:33.316 ************************************ 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:33.316 * Looking for test storage... 00:20:33.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:33.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.316 --rc genhtml_branch_coverage=1 00:20:33.316 --rc genhtml_function_coverage=1 00:20:33.316 --rc genhtml_legend=1 00:20:33.316 --rc geninfo_all_blocks=1 00:20:33.316 --rc geninfo_unexecuted_blocks=1 00:20:33.316 00:20:33.316 ' 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:33.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.316 --rc genhtml_branch_coverage=1 00:20:33.316 --rc genhtml_function_coverage=1 00:20:33.316 --rc genhtml_legend=1 00:20:33.316 --rc geninfo_all_blocks=1 00:20:33.316 --rc geninfo_unexecuted_blocks=1 00:20:33.316 00:20:33.316 ' 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:33.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.316 --rc genhtml_branch_coverage=1 00:20:33.316 --rc genhtml_function_coverage=1 00:20:33.316 --rc genhtml_legend=1 00:20:33.316 --rc geninfo_all_blocks=1 00:20:33.316 --rc geninfo_unexecuted_blocks=1 00:20:33.316 00:20:33.316 ' 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:33.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.316 --rc genhtml_branch_coverage=1 
00:20:33.316 --rc genhtml_function_coverage=1 00:20:33.316 --rc genhtml_legend=1 00:20:33.316 --rc geninfo_all_blocks=1 00:20:33.316 --rc geninfo_unexecuted_blocks=1 00:20:33.316 00:20:33.316 ' 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:33.316 10:48:22 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.316 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:33.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:20:33.317 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:39.882 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:20:39.883 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:39.883 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:39.883 Found net devices under 0000:86:00.0: cvl_0_0 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.883 
10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:39.883 Found net devices under 0000:86:00.1: cvl_0_1 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:39.883 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:20:39.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:39.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:20:39.883 00:20:39.883 --- 10.0.0.2 ping statistics --- 00:20:39.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.883 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:39.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:39.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:20:39.884 00:20:39.884 --- 10.0.0.1 ping statistics --- 00:20:39.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.884 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3938783 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3938783 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3938783 ']' 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.884 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:39.884 [2024-11-19 10:48:28.878783] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:20:39.884 [2024-11-19 10:48:28.878825] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:39.884 [2024-11-19 10:48:28.959920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:39.884 [2024-11-19 10:48:29.006191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.884 [2024-11-19 10:48:29.006228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.884 [2024-11-19 10:48:29.006235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.884 [2024-11-19 10:48:29.006241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.884 [2024-11-19 10:48:29.006246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:39.884 [2024-11-19 10:48:29.007530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:39.884 [2024-11-19 10:48:29.007561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:39.884 [2024-11-19 10:48:29.007671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.884 [2024-11-19 10:48:29.007672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:40.142 [2024-11-19 10:48:29.764527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:40.142 10:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:40.142 Malloc0 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.142 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:40.142 [2024-11-19 10:48:29.808831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.142 10:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.143 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:40.143 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:40.143 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:20:40.143 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:20:40.143 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.143 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.143 { 00:20:40.143 "params": { 00:20:40.143 "name": "Nvme$subsystem", 00:20:40.143 "trtype": "$TEST_TRANSPORT", 00:20:40.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.143 "adrfam": "ipv4", 00:20:40.143 "trsvcid": "$NVMF_PORT", 00:20:40.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.143 "hdgst": ${hdgst:-false}, 00:20:40.143 "ddgst": ${ddgst:-false} 00:20:40.143 }, 00:20:40.143 "method": "bdev_nvme_attach_controller" 00:20:40.143 } 00:20:40.143 EOF 00:20:40.143 )") 00:20:40.143 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:20:40.143 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:20:40.143 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:20:40.143 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:40.143 "params": { 00:20:40.143 "name": "Nvme1", 00:20:40.143 "trtype": "tcp", 00:20:40.143 "traddr": "10.0.0.2", 00:20:40.143 "adrfam": "ipv4", 00:20:40.143 "trsvcid": "4420", 00:20:40.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.143 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.143 "hdgst": false, 00:20:40.143 "ddgst": false 00:20:40.143 }, 00:20:40.143 "method": "bdev_nvme_attach_controller" 00:20:40.143 }' 00:20:40.143 [2024-11-19 10:48:29.861579] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:20:40.143 [2024-11-19 10:48:29.861626] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3939029 ] 00:20:40.401 [2024-11-19 10:48:29.940133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:40.401 [2024-11-19 10:48:29.988501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.401 [2024-11-19 10:48:29.988609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.401 [2024-11-19 10:48:29.988610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.401 I/O targets: 00:20:40.401 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:40.401 00:20:40.401 00:20:40.401 CUnit - A unit testing framework for C - Version 2.1-3 00:20:40.401 http://cunit.sourceforge.net/ 00:20:40.401 00:20:40.401 00:20:40.401 Suite: bdevio tests on: Nvme1n1 00:20:40.658 Test: blockdev write read block ...passed 00:20:40.658 Test: blockdev write zeroes read block ...passed 00:20:40.658 Test: blockdev write zeroes read no split ...passed 00:20:40.658 Test: blockdev write zeroes 
read split ...passed 00:20:40.658 Test: blockdev write zeroes read split partial ...passed 00:20:40.658 Test: blockdev reset ...[2024-11-19 10:48:30.322498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:40.658 [2024-11-19 10:48:30.322561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac1920 (9): Bad file descriptor 00:20:40.916 [2024-11-19 10:48:30.457076] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:20:40.916 passed 00:20:40.916 Test: blockdev write read 8 blocks ...passed 00:20:40.916 Test: blockdev write read size > 128k ...passed 00:20:40.916 Test: blockdev write read invalid size ...passed 00:20:40.916 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:40.916 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:40.916 Test: blockdev write read max offset ...passed 00:20:40.916 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:40.916 Test: blockdev writev readv 8 blocks ...passed 00:20:40.916 Test: blockdev writev readv 30 x 1block ...passed 00:20:40.916 Test: blockdev writev readv block ...passed 00:20:40.916 Test: blockdev writev readv size > 128k ...passed 00:20:40.916 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:40.916 Test: blockdev comparev and writev ...[2024-11-19 10:48:30.627342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:40.916 [2024-11-19 10:48:30.627370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:40.916 [2024-11-19 10:48:30.627385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:40.916 [2024-11-19 
10:48:30.627393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.916 [2024-11-19 10:48:30.627630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:40.916 [2024-11-19 10:48:30.627640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:40.916 [2024-11-19 10:48:30.627652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:40.916 [2024-11-19 10:48:30.627660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:40.916 [2024-11-19 10:48:30.627886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:40.916 [2024-11-19 10:48:30.627897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:40.916 [2024-11-19 10:48:30.627908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:40.916 [2024-11-19 10:48:30.627916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:40.916 [2024-11-19 10:48:30.628137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:40.916 [2024-11-19 10:48:30.628147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:40.916 [2024-11-19 10:48:30.628159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:20:40.916 [2024-11-19 10:48:30.628166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:40.916 passed 00:20:41.174 Test: blockdev nvme passthru rw ...passed 00:20:41.174 Test: blockdev nvme passthru vendor specific ...[2024-11-19 10:48:30.710510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:41.174 [2024-11-19 10:48:30.710528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:41.174 [2024-11-19 10:48:30.710635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:41.174 [2024-11-19 10:48:30.710645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:41.175 [2024-11-19 10:48:30.710759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:41.175 [2024-11-19 10:48:30.710768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:41.175 [2024-11-19 10:48:30.710886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:41.175 [2024-11-19 10:48:30.710895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:41.175 passed 00:20:41.175 Test: blockdev nvme admin passthru ...passed 00:20:41.175 Test: blockdev copy ...passed 00:20:41.175 00:20:41.175 Run Summary: Type Total Ran Passed Failed Inactive 00:20:41.175 suites 1 1 n/a 0 0 00:20:41.175 tests 23 23 23 0 0 00:20:41.175 asserts 152 152 152 0 n/a 00:20:41.175 00:20:41.175 Elapsed time = 1.220 seconds 
00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:41.433 rmmod nvme_tcp 00:20:41.433 rmmod nvme_fabrics 00:20:41.433 rmmod nvme_keyring 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3938783 ']' 00:20:41.433 10:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3938783 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3938783 ']' 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3938783 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3938783 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:20:41.433 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3938783' 00:20:41.434 killing process with pid 3938783 00:20:41.434 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3938783 00:20:41.434 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3938783 00:20:41.692 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:41.692 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:41.692 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:41.692 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:20:41.692 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:20:41.692 10:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:41.692 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:20:41.692 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:41.692 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:41.692 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.692 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.692 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:44.226 00:20:44.226 real 0m10.884s 00:20:44.226 user 0m13.531s 00:20:44.226 sys 0m5.381s 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:44.226 ************************************ 00:20:44.226 END TEST nvmf_bdevio_no_huge 00:20:44.226 ************************************ 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:44.226 
************************************ 00:20:44.226 START TEST nvmf_tls 00:20:44.226 ************************************ 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:44.226 * Looking for test storage... 00:20:44.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:44.226 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:44.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.227 --rc genhtml_branch_coverage=1 00:20:44.227 --rc genhtml_function_coverage=1 00:20:44.227 --rc genhtml_legend=1 00:20:44.227 --rc geninfo_all_blocks=1 00:20:44.227 --rc geninfo_unexecuted_blocks=1 00:20:44.227 00:20:44.227 ' 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:44.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.227 --rc genhtml_branch_coverage=1 00:20:44.227 --rc genhtml_function_coverage=1 00:20:44.227 --rc genhtml_legend=1 00:20:44.227 --rc geninfo_all_blocks=1 00:20:44.227 --rc geninfo_unexecuted_blocks=1 00:20:44.227 00:20:44.227 ' 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:44.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.227 --rc genhtml_branch_coverage=1 00:20:44.227 --rc genhtml_function_coverage=1 00:20:44.227 --rc genhtml_legend=1 00:20:44.227 --rc geninfo_all_blocks=1 00:20:44.227 --rc geninfo_unexecuted_blocks=1 00:20:44.227 00:20:44.227 ' 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:44.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.227 --rc genhtml_branch_coverage=1 00:20:44.227 --rc genhtml_function_coverage=1 00:20:44.227 --rc genhtml_legend=1 00:20:44.227 --rc geninfo_all_blocks=1 00:20:44.227 --rc geninfo_unexecuted_blocks=1 00:20:44.227 00:20:44.227 ' 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.227 
10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:44.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:20:44.227 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.799 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.800 10:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:50.800 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:50.800 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.800 10:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:50.800 Found net devices under 0000:86:00.0: cvl_0_0 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:50.800 Found net devices under 0000:86:00.1: cvl_0_1 00:20:50.800 10:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:50.800 
10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:50.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:20:50.800 00:20:50.800 --- 10.0.0.2 ping statistics --- 00:20:50.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.800 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:50.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:50.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:20:50.800 00:20:50.800 --- 10.0.0.1 ping statistics --- 00:20:50.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.800 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3942807 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3942807 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3942807 ']' 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.800 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.801 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.801 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.801 [2024-11-19 10:48:39.804882] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:20:50.801 [2024-11-19 10:48:39.804928] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.801 [2024-11-19 10:48:39.885223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.801 [2024-11-19 10:48:39.926120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.801 [2024-11-19 10:48:39.926158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:50.801 [2024-11-19 10:48:39.926165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.801 [2024-11-19 10:48:39.926171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.801 [2024-11-19 10:48:39.926176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:50.801 [2024-11-19 10:48:39.926743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.060 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.060 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:51.060 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:51.060 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.060 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.060 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.060 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:51.060 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:51.319 true 00:20:51.319 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:51.319 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:51.319 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:51.319 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:51.319 
10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:51.578 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:51.578 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:51.836 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:51.836 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:51.836 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:51.836 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:51.836 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:52.095 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:52.095 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:52.095 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:52.095 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:52.354 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:52.354 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:52.354 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:20:52.612 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:52.612 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:52.612 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:52.612 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:52.612 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:52.871 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:52.871 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:53.130 10:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.aQx3s3O2d2 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.iQTzkTNb4z 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.aQx3s3O2d2 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.iQTzkTNb4z 00:20:53.130 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:53.389 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:53.664 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.aQx3s3O2d2 00:20:53.664 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.aQx3s3O2d2 00:20:53.664 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:53.952 [2024-11-19 10:48:43.461870] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.953 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:53.953 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:54.233 [2024-11-19 10:48:43.822789] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:54.233 [2024-11-19 10:48:43.823015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.233 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:54.493 malloc0 00:20:54.493 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:54.493 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.aQx3s3O2d2 00:20:54.752 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:55.010 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.aQx3s3O2d2 00:21:04.989 Initializing NVMe Controllers 00:21:04.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:04.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:04.989 Initialization complete. Launching workers. 
00:21:04.989 ========================================================
00:21:04.989 Latency(us)
00:21:04.990 Device Information : IOPS MiB/s Average min max
00:21:04.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16844.95 65.80 3799.43 775.03 5285.45
00:21:04.990 ========================================================
00:21:04.990 Total : 16844.95 65.80 3799.43 775.03 5285.45
00:21:04.990
00:21:04.990 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aQx3s3O2d2
00:21:04.990 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:21:04.990 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:21:04.990 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:21:04.990 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aQx3s3O2d2
00:21:04.990 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:04.990 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3945170
00:21:04.990 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:21:04.990 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3945170 /var/tmp/bdevperf.sock
00:21:04.990 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:21:04.990 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3945170 ']'
00:21:04.990 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
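The PSKs exercised in this run (key at target/tls.sh@119 and key_2 at @120, passed to bdevperf via `--psk`/`keyring_file_add_key` above) use the NVMe TLS PSK interchange layout: a `NVMeTLSkey-1` prefix, a two-digit hex hash identifier, and a base64 payload. A minimal sketch of how the test's `format_interchange_psk` helper appears to build such strings, assuming (as SPDK's `nvmf/common.sh` `format_key` helper appears to do; not a normative implementation) that the configured key is taken as its ASCII bytes with a little-endian CRC-32 appended before base64 encoding:

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
    """Pack a configured PSK into the NVMe TLS PSK interchange format.

    Sketch only: the key string is used as raw ASCII bytes, a little-endian
    CRC-32 of those bytes is appended, and the result is base64-encoded
    (assumptions mirroring SPDK's test helper, not the authoritative spec).
    """
    raw = key.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(raw + crc).decode("ascii")
    # "<prefix>:<hash id as 2 hex digits>:<base64(key || crc32)>:"
    return "{}:{:02x}:{}:".format(prefix, digest, b64)

print(format_interchange_psk("ffeeddccbbaa99887766554433221100", 1))
```

If the CRC trailer or byte order were different, this sketch would not round-trip the `NVMeTLSkey-1:01:…:` strings echoed into /tmp/tmp.aQx3s3O2d2 and /tmp/tmp.iQTzkTNb4z in the log; the authoritative definition lives in the NVMe TCP transport specification.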
00:21:04.990 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.990 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.990 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.990 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.990 [2024-11-19 10:48:54.725129] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:21:04.990 [2024-11-19 10:48:54.725181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3945170 ] 00:21:05.248 [2024-11-19 10:48:54.797485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.248 [2024-11-19 10:48:54.839095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.248 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.248 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:05.248 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aQx3s3O2d2 00:21:05.506 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0
00:21:05.765 [2024-11-19 10:48:55.298590] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:21:05.765 TLSTESTn1
00:21:05.765 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:21:05.765 Running I/O for 10 seconds...
00:21:08.078 5264.00 IOPS, 20.56 MiB/s
[2024-11-19T09:48:58.805Z] 5473.00 IOPS, 21.38 MiB/s
[2024-11-19T09:48:59.740Z] 5532.00 IOPS, 21.61 MiB/s
[2024-11-19T09:49:00.678Z] 5565.50 IOPS, 21.74 MiB/s
[2024-11-19T09:49:01.615Z] 5556.80 IOPS, 21.71 MiB/s
[2024-11-19T09:49:02.551Z] 5579.33 IOPS, 21.79 MiB/s
[2024-11-19T09:49:03.927Z] 5563.43 IOPS, 21.73 MiB/s
[2024-11-19T09:49:04.865Z] 5560.75 IOPS, 21.72 MiB/s
[2024-11-19T09:49:05.801Z] 5559.67 IOPS, 21.72 MiB/s
[2024-11-19T09:49:05.802Z] 5572.20 IOPS, 21.77 MiB/s
00:21:16.010 Latency(us)
00:21:16.010 [2024-11-19T09:49:05.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:16.010 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:16.010 Verification LBA range: start 0x0 length 0x2000
00:21:16.010 TLSTESTn1 : 10.02 5575.48 21.78 0.00 0.00 22922.20 5929.45 57671.68
00:21:16.010 [2024-11-19T09:49:05.802Z] ===================================================================================================================
00:21:16.010 [2024-11-19T09:49:05.802Z] Total : 5575.48 21.78 0.00 0.00 22922.20 5929.45 57671.68
00:21:16.010 {
00:21:16.010 "results": [
00:21:16.010 {
00:21:16.010 "job": "TLSTESTn1",
00:21:16.010 "core_mask": "0x4",
00:21:16.010 "workload": "verify",
00:21:16.010 "status": "finished",
00:21:16.010 "verify_range": {
00:21:16.010 "start": 0,
00:21:16.010 "length": 8192
00:21:16.010 },
00:21:16.010 "queue_depth": 128,
00:21:16.010 "io_size": 4096,
00:21:16.010 "runtime": 10.01708,
00:21:16.010 "iops": 5575.477085138583,
00:21:16.010 "mibps": 21.77920736382259,
00:21:16.010 "io_failed": 0,
00:21:16.010 "io_timeout": 0,
00:21:16.010 "avg_latency_us": 22922.199584806243,
00:21:16.010 "min_latency_us": 5929.447619047619,
00:21:16.010 "max_latency_us": 57671.68
00:21:16.010 }
00:21:16.010 ],
00:21:16.010 "core_count": 1
00:21:16.010 }
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3945170
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3945170 ']'
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3945170
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3945170
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3945170'
00:21:16.010 killing process with pid 3945170
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3945170
00:21:16.010 Received shutdown signal, test time was about 10.000000 seconds
00:21:16.010
00:21:16.010 Latency(us)
00:21:16.010 [2024-11-19T09:49:05.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:16.010 [2024-11-19T09:49:05.802Z] ===================================================================================================================
00:21:16.010 [2024-11-19T09:49:05.802Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3945170
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iQTzkTNb4z
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iQTzkTNb4z
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iQTzkTNb4z
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iQTzkTNb4z
00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- #
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3947002 00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3947002 /var/tmp/bdevperf.sock 00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3947002 ']' 00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.010 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.269 [2024-11-19 10:49:05.802608] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:21:16.269 [2024-11-19 10:49:05.802658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3947002 ] 00:21:16.269 [2024-11-19 10:49:05.862460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.269 [2024-11-19 10:49:05.899170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.269 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.269 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:16.269 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iQTzkTNb4z 00:21:16.527 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:16.787 [2024-11-19 10:49:06.361143] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:16.787 [2024-11-19 10:49:06.372345] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:16.787 [2024-11-19 10:49:06.372529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7f170 (107): Transport endpoint is not connected 00:21:16.787 [2024-11-19 10:49:06.373523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7f170 (9): Bad file descriptor 00:21:16.787 
[2024-11-19 10:49:06.374524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state
00:21:16.787 [2024-11-19 10:49:06.374533] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:21:16.787 [2024-11-19 10:49:06.374540] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted
00:21:16.787 [2024-11-19 10:49:06.374550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state.
00:21:16.787 request:
00:21:16.787 {
00:21:16.787 "name": "TLSTEST",
00:21:16.787 "trtype": "tcp",
00:21:16.787 "traddr": "10.0.0.2",
00:21:16.787 "adrfam": "ipv4",
00:21:16.787 "trsvcid": "4420",
00:21:16.787 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:16.787 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:16.787 "prchk_reftag": false,
00:21:16.787 "prchk_guard": false,
00:21:16.787 "hdgst": false,
00:21:16.787 "ddgst": false,
00:21:16.787 "psk": "key0",
00:21:16.787 "allow_unrecognized_csi": false,
00:21:16.787 "method": "bdev_nvme_attach_controller",
00:21:16.787 "req_id": 1
00:21:16.787 }
00:21:16.787 Got JSON-RPC error response
00:21:16.787 response:
00:21:16.787 {
00:21:16.787 "code": -5,
00:21:16.787 "message": "Input/output error"
00:21:16.787 }
00:21:16.787 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3947002
00:21:16.787 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3947002 ']'
00:21:16.787 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3947002
00:21:16.787 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:21:16.787 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:16.787 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3947002 00:21:16.787 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:16.787 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:16.787 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3947002' 00:21:16.787 killing process with pid 3947002 00:21:16.787 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3947002 00:21:16.787 Received shutdown signal, test time was about 10.000000 seconds 00:21:16.787 00:21:16.787 Latency(us) 00:21:16.787 [2024-11-19T09:49:06.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.787 [2024-11-19T09:49:06.579Z] =================================================================================================================== 00:21:16.787 [2024-11-19T09:49:06.579Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:16.787 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3947002 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aQx3s3O2d2 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aQx3s3O2d2 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aQx3s3O2d2 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aQx3s3O2d2 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3947230 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3947230 
/var/tmp/bdevperf.sock 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3947230 ']' 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.047 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.047 [2024-11-19 10:49:06.656155] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:21:17.047 [2024-11-19 10:49:06.656207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3947230 ] 00:21:17.047 [2024-11-19 10:49:06.712567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.047 [2024-11-19 10:49:06.755791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.306 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.306 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:17.306 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aQx3s3O2d2 00:21:17.306 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:17.567 [2024-11-19 10:49:07.193877] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:17.567 [2024-11-19 10:49:07.204364] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:17.567 [2024-11-19 10:49:07.204387] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:17.567 [2024-11-19 10:49:07.204409] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected
00:21:17.567 [2024-11-19 10:49:07.205227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1611170 (107): Transport endpoint is not connected
00:21:17.567 [2024-11-19 10:49:07.206221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1611170 (9): Bad file descriptor
00:21:17.567 [2024-11-19 10:49:07.207223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state
00:21:17.567 [2024-11-19 10:49:07.207248] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:21:17.567 [2024-11-19 10:49:07.207255] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted
00:21:17.567 [2024-11-19 10:49:07.207265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state.
00:21:17.567 request:
00:21:17.567 {
00:21:17.567 "name": "TLSTEST",
00:21:17.567 "trtype": "tcp",
00:21:17.567 "traddr": "10.0.0.2",
00:21:17.567 "adrfam": "ipv4",
00:21:17.567 "trsvcid": "4420",
00:21:17.567 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:17.567 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:21:17.567 "prchk_reftag": false,
00:21:17.567 "prchk_guard": false,
00:21:17.567 "hdgst": false,
00:21:17.567 "ddgst": false,
00:21:17.567 "psk": "key0",
00:21:17.567 "allow_unrecognized_csi": false,
00:21:17.567 "method": "bdev_nvme_attach_controller",
00:21:17.567 "req_id": 1
00:21:17.567 }
00:21:17.567 Got JSON-RPC error response
00:21:17.567 response:
00:21:17.567 {
00:21:17.567 "code": -5,
00:21:17.567 "message": "Input/output error"
00:21:17.567 }
00:21:17.567 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3947230
00:21:17.567 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3947230 ']'
00:21:17.567 10:49:07
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3947230 00:21:17.567 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:17.567 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.567 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3947230 00:21:17.567 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:17.567 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:17.567 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3947230' 00:21:17.567 killing process with pid 3947230 00:21:17.567 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3947230 00:21:17.567 Received shutdown signal, test time was about 10.000000 seconds 00:21:17.567 00:21:17.567 Latency(us) 00:21:17.567 [2024-11-19T09:49:07.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.567 [2024-11-19T09:49:07.359Z] =================================================================================================================== 00:21:17.567 [2024-11-19T09:49:07.359Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:17.567 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3947230 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:17.831 10:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aQx3s3O2d2 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aQx3s3O2d2 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aQx3s3O2d2 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aQx3s3O2d2 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3947258 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3947258 /var/tmp/bdevperf.sock 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3947258 ']' 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.831 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.831 [2024-11-19 10:49:07.488123] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:21:17.831 [2024-11-19 10:49:07.488176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3947258 ] 00:21:17.831 [2024-11-19 10:49:07.555099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.831 [2024-11-19 10:49:07.591849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.097 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.097 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:18.097 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aQx3s3O2d2 00:21:18.355 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:18.355 [2024-11-19 10:49:08.070423] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.355 [2024-11-19 10:49:08.076651] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:18.355 [2024-11-19 10:49:08.076672] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:18.355 [2024-11-19 10:49:08.076694] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:18.355 [2024-11-19 10:49:08.076830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb34170 (107): Transport endpoint is not connected 00:21:18.355 [2024-11-19 10:49:08.077825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb34170 (9): Bad file descriptor 00:21:18.355 [2024-11-19 10:49:08.078826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:21:18.356 [2024-11-19 10:49:08.078840] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:18.356 [2024-11-19 10:49:08.078847] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:21:18.356 [2024-11-19 10:49:08.078857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:21:18.356 request: 00:21:18.356 { 00:21:18.356 "name": "TLSTEST", 00:21:18.356 "trtype": "tcp", 00:21:18.356 "traddr": "10.0.0.2", 00:21:18.356 "adrfam": "ipv4", 00:21:18.356 "trsvcid": "4420", 00:21:18.356 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:18.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.356 "prchk_reftag": false, 00:21:18.356 "prchk_guard": false, 00:21:18.356 "hdgst": false, 00:21:18.356 "ddgst": false, 00:21:18.356 "psk": "key0", 00:21:18.356 "allow_unrecognized_csi": false, 00:21:18.356 "method": "bdev_nvme_attach_controller", 00:21:18.356 "req_id": 1 00:21:18.356 } 00:21:18.356 Got JSON-RPC error response 00:21:18.356 response: 00:21:18.356 { 00:21:18.356 "code": -5, 00:21:18.356 "message": "Input/output error" 00:21:18.356 } 00:21:18.356 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3947258 00:21:18.356 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3947258 ']' 00:21:18.356 10:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3947258 00:21:18.356 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:18.356 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:18.356 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3947258 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3947258' 00:21:18.615 killing process with pid 3947258 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3947258 00:21:18.615 Received shutdown signal, test time was about 10.000000 seconds 00:21:18.615 00:21:18.615 Latency(us) 00:21:18.615 [2024-11-19T09:49:08.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.615 [2024-11-19T09:49:08.407Z] =================================================================================================================== 00:21:18.615 [2024-11-19T09:49:08.407Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3947258 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:18.615 10:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:18.615 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:18.616 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:18.616 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:18.616 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:18.616 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3947484 00:21:18.616 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:18.616 10:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:18.616 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3947484 /var/tmp/bdevperf.sock 00:21:18.616 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3947484 ']' 00:21:18.616 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.616 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.616 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.616 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.616 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.616 [2024-11-19 10:49:08.360537] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:21:18.616 [2024-11-19 10:49:08.360593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3947484 ] 00:21:18.874 [2024-11-19 10:49:08.424327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.874 [2024-11-19 10:49:08.463877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.874 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.874 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:18.874 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:21:19.134 [2024-11-19 10:49:08.725069] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:21:19.134 [2024-11-19 10:49:08.725100] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:19.134 request: 00:21:19.134 { 00:21:19.134 "name": "key0", 00:21:19.134 "path": "", 00:21:19.134 "method": "keyring_file_add_key", 00:21:19.134 "req_id": 1 00:21:19.134 } 00:21:19.134 Got JSON-RPC error response 00:21:19.134 response: 00:21:19.134 { 00:21:19.134 "code": -1, 00:21:19.134 "message": "Operation not permitted" 00:21:19.134 } 00:21:19.134 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:19.134 [2024-11-19 10:49:08.921677] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:21:19.134 [2024-11-19 10:49:08.921704] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:19.394 request: 00:21:19.394 { 00:21:19.394 "name": "TLSTEST", 00:21:19.394 "trtype": "tcp", 00:21:19.394 "traddr": "10.0.0.2", 00:21:19.394 "adrfam": "ipv4", 00:21:19.394 "trsvcid": "4420", 00:21:19.394 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.394 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:19.394 "prchk_reftag": false, 00:21:19.394 "prchk_guard": false, 00:21:19.394 "hdgst": false, 00:21:19.394 "ddgst": false, 00:21:19.394 "psk": "key0", 00:21:19.394 "allow_unrecognized_csi": false, 00:21:19.394 "method": "bdev_nvme_attach_controller", 00:21:19.394 "req_id": 1 00:21:19.394 } 00:21:19.394 Got JSON-RPC error response 00:21:19.394 response: 00:21:19.394 { 00:21:19.394 "code": -126, 00:21:19.394 "message": "Required key not available" 00:21:19.394 } 00:21:19.394 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3947484 00:21:19.394 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3947484 ']' 00:21:19.394 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3947484 00:21:19.394 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:19.394 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.394 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3947484 00:21:19.394 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:19.394 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:19.394 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3947484' 00:21:19.394 killing process with pid 3947484 
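The `NOT run_bdevperf ...` / `valid_exec_arg` lines above are autotest's negative-test idiom: the test passes only when the wrapped command fails (here, attaching a TLS controller with a bad or empty PSK). A simplified sketch of that wrapper — names and structure are illustrative, not the exact autotest_common.sh implementation:

```shell
# Negative-test wrapper: run a command that is expected to fail and invert
# its exit status, so the surrounding test script sees success iff the
# command failed (simplified from the NOT helper used in this log).
NOT() {
    local es=0
    "$@" || es=$?
    # Succeed only when the wrapped command failed.
    (( es != 0 ))
}

NOT false && echo "wrapped command failed, as the test requires"
```

This is why the log shows `return 1` from `run_bdevperf` followed by `es=1` and the overall test continuing normally.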
00:21:19.394 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3947484 00:21:19.394 Received shutdown signal, test time was about 10.000000 seconds 00:21:19.394 00:21:19.394 Latency(us) 00:21:19.394 [2024-11-19T09:49:09.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.394 [2024-11-19T09:49:09.186Z] =================================================================================================================== 00:21:19.394 [2024-11-19T09:49:09.186Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:19.394 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3947484 00:21:19.394 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:19.394 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:19.394 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:19.394 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:19.394 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:19.394 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3942807 00:21:19.394 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3942807 ']' 00:21:19.394 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3942807 00:21:19.394 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:19.394 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.394 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3942807 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3942807' 00:21:19.660 killing process with pid 3942807 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3942807 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3942807 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.fPResapnIR 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:19.660 10:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.fPResapnIR 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:21:19.660 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:19.661 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.661 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.661 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3947721 00:21:19.661 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3947721 00:21:19.661 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:19.661 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3947721 ']' 00:21:19.661 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.661 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.661 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.661 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.661 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.921 [2024-11-19 10:49:09.479009] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
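The `format_interchange_psk` / `format_key` step above builds the `NVMeTLSkey-1:02:...:` string by base64-encoding the configured key bytes plus a 4-byte CRC32 trailer inside a versioned prefix. A hedged reconstruction of that step, mirroring the embedded `python -` heredoc in the log (the little-endian zlib CRC32 trailer is inferred from the logged output, not quoted from SPDK source):

```shell
# Rebuild an NVMe TLS PSK interchange string from a configured key.
# Format (as inferred): "NVMeTLSkey-1:<digest>:" + base64(key || crc32) + ":"
key=00112233445566778899aabbccddeeff0011223344556677
psk_interchange=$(python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # key bytes as configured
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte integrity trailer
print(f"NVMeTLSkey-1:02:{base64.b64encode(key + crc).decode()}:")
EOF
)
echo "$psk_interchange"
```

Since the 48-byte key is a multiple of 3, the first 64 base64 characters encode the key alone, which is why the logged value begins `MDAxMTIy...` (base64 of the ASCII `001122...`).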
00:21:19.921 [2024-11-19 10:49:09.479060] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.921 [2024-11-19 10:49:09.560230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.921 [2024-11-19 10:49:09.598338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.921 [2024-11-19 10:49:09.598373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.921 [2024-11-19 10:49:09.598379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.921 [2024-11-19 10:49:09.598386] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.921 [2024-11-19 10:49:09.598391] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:19.921 [2024-11-19 10:49:09.598944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.921 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.921 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:19.921 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:19.921 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:19.921 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.180 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.180 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.fPResapnIR 00:21:20.180 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fPResapnIR 00:21:20.180 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:20.180 [2024-11-19 10:49:09.906095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.180 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:20.439 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:20.698 [2024-11-19 10:49:10.303125] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:20.698 [2024-11-19 10:49:10.303350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:20.698 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:20.957 malloc0 00:21:20.957 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:20.957 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fPResapnIR 00:21:21.215 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:21.474 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fPResapnIR 00:21:21.474 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:21.474 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:21.474 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:21.474 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fPResapnIR 00:21:21.474 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:21.474 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:21.474 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3947989 00:21:21.474 10:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:21.474 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3947989 /var/tmp/bdevperf.sock 00:21:21.474 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3947989 ']' 00:21:21.474 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.474 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.474 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:21.474 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.474 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.474 [2024-11-19 10:49:11.180222] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:21:21.474 [2024-11-19 10:49:11.180273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3947989 ] 00:21:21.474 [2024-11-19 10:49:11.251838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.733 [2024-11-19 10:49:11.292503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.733 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.733 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:21.733 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fPResapnIR 00:21:21.992 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:21.992 [2024-11-19 10:49:11.751209] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:22.251 TLSTESTn1 00:21:22.251 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:22.251 Running I/O for 10 seconds... 
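The successful run above is set up by `setup_nvmf_tgt`, whose RPC sequence is scattered across the log. A condensed dry-run of that sequence, with `rpc` stubbed to print instead of contacting a live `nvmf_tgt`, so the ordering is visible (the key file path is the temp file from the log):

```shell
# Dry-run of the target-side TLS setup performed above; rpc is a print stub,
# not the real /var/jenkins/.../scripts/rpc.py client.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc bdev_malloc_create 32 4096 -b malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc keyring_file_add_key key0 /tmp/tmp.fPResapnIR
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```

Note the `-k` flag on `nvmf_subsystem_add_listener`, which enables the TLS listener that logs "TLS support is considered experimental" above.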
00:21:24.562 5389.00 IOPS, 21.05 MiB/s [2024-11-19T09:49:15.289Z] 5422.00 IOPS, 21.18 MiB/s [2024-11-19T09:49:16.224Z] 5456.00 IOPS, 21.31 MiB/s [2024-11-19T09:49:17.159Z] 5511.00 IOPS, 21.53 MiB/s [2024-11-19T09:49:18.095Z] 5492.20 IOPS, 21.45 MiB/s [2024-11-19T09:49:19.031Z] 5515.17 IOPS, 21.54 MiB/s [2024-11-19T09:49:19.967Z] 5535.86 IOPS, 21.62 MiB/s [2024-11-19T09:49:21.343Z] 5547.00 IOPS, 21.67 MiB/s [2024-11-19T09:49:22.279Z] 5550.11 IOPS, 21.68 MiB/s [2024-11-19T09:49:22.279Z] 5558.60 IOPS, 21.71 MiB/s 00:21:32.487 Latency(us) 00:21:32.487 [2024-11-19T09:49:22.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.487 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:32.487 Verification LBA range: start 0x0 length 0x2000 00:21:32.487 TLSTESTn1 : 10.01 5562.91 21.73 0.00 0.00 22976.22 5492.54 24341.94 00:21:32.487 [2024-11-19T09:49:22.279Z] =================================================================================================================== 00:21:32.487 [2024-11-19T09:49:22.279Z] Total : 5562.91 21.73 0.00 0.00 22976.22 5492.54 24341.94 00:21:32.487 { 00:21:32.487 "results": [ 00:21:32.487 { 00:21:32.487 "job": "TLSTESTn1", 00:21:32.487 "core_mask": "0x4", 00:21:32.487 "workload": "verify", 00:21:32.487 "status": "finished", 00:21:32.487 "verify_range": { 00:21:32.487 "start": 0, 00:21:32.487 "length": 8192 00:21:32.487 }, 00:21:32.487 "queue_depth": 128, 00:21:32.487 "io_size": 4096, 00:21:32.487 "runtime": 10.014906, 00:21:32.487 "iops": 5562.907929440376, 00:21:32.487 "mibps": 21.73010909937647, 00:21:32.487 "io_failed": 0, 00:21:32.487 "io_timeout": 0, 00:21:32.487 "avg_latency_us": 22976.22049287492, 00:21:32.487 "min_latency_us": 5492.540952380952, 00:21:32.487 "max_latency_us": 24341.942857142858 00:21:32.487 } 00:21:32.487 ], 00:21:32.487 "core_count": 1 00:21:32.487 } 00:21:32.487 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:21:32.487 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3947989 00:21:32.487 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3947989 ']' 00:21:32.487 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3947989 00:21:32.487 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3947989 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3947989' 00:21:32.487 killing process with pid 3947989 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3947989 00:21:32.487 Received shutdown signal, test time was about 10.000000 seconds 00:21:32.487 00:21:32.487 Latency(us) 00:21:32.487 [2024-11-19T09:49:22.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.487 [2024-11-19T09:49:22.279Z] =================================================================================================================== 00:21:32.487 [2024-11-19T09:49:22.279Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3947989 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.fPResapnIR 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fPResapnIR 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fPResapnIR 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fPResapnIR 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fPResapnIR 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3949779 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:32.487 
10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3949779 /var/tmp/bdevperf.sock 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3949779 ']' 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:32.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.487 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.487 [2024-11-19 10:49:22.266955] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:21:32.487 [2024-11-19 10:49:22.267009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3949779 ] 00:21:32.746 [2024-11-19 10:49:22.343168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.746 [2024-11-19 10:49:22.381336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.746 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.746 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:32.746 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fPResapnIR 00:21:33.005 [2024-11-19 10:49:22.642532] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fPResapnIR': 0100666 00:21:33.005 [2024-11-19 10:49:22.642566] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:33.005 request: 00:21:33.005 { 00:21:33.005 "name": "key0", 00:21:33.005 "path": "/tmp/tmp.fPResapnIR", 00:21:33.005 "method": "keyring_file_add_key", 00:21:33.005 "req_id": 1 00:21:33.005 } 00:21:33.005 Got JSON-RPC error response 00:21:33.005 response: 00:21:33.005 { 00:21:33.005 "code": -1, 00:21:33.005 "message": "Operation not permitted" 00:21:33.005 } 00:21:33.005 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:33.264 [2024-11-19 10:49:22.831099] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:33.264 [2024-11-19 10:49:22.831126] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:33.264 request: 00:21:33.264 { 00:21:33.264 "name": "TLSTEST", 00:21:33.264 "trtype": "tcp", 00:21:33.264 "traddr": "10.0.0.2", 00:21:33.264 "adrfam": "ipv4", 00:21:33.264 "trsvcid": "4420", 00:21:33.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:33.264 "prchk_reftag": false, 00:21:33.264 "prchk_guard": false, 00:21:33.264 "hdgst": false, 00:21:33.264 "ddgst": false, 00:21:33.264 "psk": "key0", 00:21:33.264 "allow_unrecognized_csi": false, 00:21:33.264 "method": "bdev_nvme_attach_controller", 00:21:33.264 "req_id": 1 00:21:33.264 } 00:21:33.264 Got JSON-RPC error response 00:21:33.264 response: 00:21:33.264 { 00:21:33.264 "code": -126, 00:21:33.264 "message": "Required key not available" 00:21:33.264 } 00:21:33.264 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3949779 00:21:33.264 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3949779 ']' 00:21:33.264 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3949779 00:21:33.264 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:33.264 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.264 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3949779 00:21:33.264 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:33.264 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:33.265 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3949779' 00:21:33.265 killing process with pid 3949779 00:21:33.265 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3949779 00:21:33.265 Received shutdown signal, test time was about 10.000000 seconds 00:21:33.265 00:21:33.265 Latency(us) 00:21:33.265 [2024-11-19T09:49:23.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.265 [2024-11-19T09:49:23.057Z] =================================================================================================================== 00:21:33.265 [2024-11-19T09:49:23.057Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:33.265 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3949779 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3947721 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3947721 ']' 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3947721 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3947721 00:21:33.525 
10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3947721' 00:21:33.525 killing process with pid 3947721 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3947721 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3947721 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3949854 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3949854 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3949854 ']' 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:21:33.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.525 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.784 [2024-11-19 10:49:23.332733] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:21:33.784 [2024-11-19 10:49:23.332786] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.784 [2024-11-19 10:49:23.412132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.784 [2024-11-19 10:49:23.449330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.784 [2024-11-19 10:49:23.449365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.784 [2024-11-19 10:49:23.449372] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.784 [2024-11-19 10:49:23.449378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.784 [2024-11-19 10:49:23.449383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:33.784 [2024-11-19 10:49:23.449960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.784 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.784 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:33.784 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:33.784 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:33.784 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.043 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.043 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.fPResapnIR 00:21:34.043 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:34.043 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.fPResapnIR 00:21:34.043 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:21:34.043 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.043 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:21:34.043 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.043 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.fPResapnIR 00:21:34.043 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fPResapnIR 00:21:34.043 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:34.043 [2024-11-19 10:49:23.768932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.043 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:34.301 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:34.561 [2024-11-19 10:49:24.157913] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:34.561 [2024-11-19 10:49:24.158118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.561 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:34.820 malloc0 00:21:34.820 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:34.820 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fPResapnIR 00:21:35.077 [2024-11-19 10:49:24.727303] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fPResapnIR': 0100666 00:21:35.077 [2024-11-19 10:49:24.727326] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:35.077 request: 00:21:35.077 { 00:21:35.077 "name": "key0", 00:21:35.077 "path": "/tmp/tmp.fPResapnIR", 00:21:35.077 "method": "keyring_file_add_key", 00:21:35.077 "req_id": 1 
00:21:35.077 } 00:21:35.077 Got JSON-RPC error response 00:21:35.077 response: 00:21:35.077 { 00:21:35.077 "code": -1, 00:21:35.077 "message": "Operation not permitted" 00:21:35.077 } 00:21:35.077 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:35.336 [2024-11-19 10:49:24.931866] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:21:35.336 [2024-11-19 10:49:24.931902] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:35.336 request: 00:21:35.336 { 00:21:35.336 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.336 "host": "nqn.2016-06.io.spdk:host1", 00:21:35.336 "psk": "key0", 00:21:35.336 "method": "nvmf_subsystem_add_host", 00:21:35.336 "req_id": 1 00:21:35.336 } 00:21:35.336 Got JSON-RPC error response 00:21:35.336 response: 00:21:35.336 { 00:21:35.336 "code": -32603, 00:21:35.336 "message": "Internal error" 00:21:35.336 } 00:21:35.336 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:35.336 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.336 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.336 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.336 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3949854 00:21:35.336 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3949854 ']' 00:21:35.336 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3949854 00:21:35.336 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:35.336 10:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.336 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3949854 00:21:35.336 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:35.336 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:35.336 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3949854' 00:21:35.336 killing process with pid 3949854 00:21:35.336 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3949854 00:21:35.336 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3949854 00:21:35.595 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.fPResapnIR 00:21:35.595 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:35.595 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:35.595 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.595 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.595 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3950337 00:21:35.595 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:35.595 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3950337 00:21:35.595 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3950337 ']' 00:21:35.595 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.595 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.595 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.595 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.595 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.595 [2024-11-19 10:49:25.244468] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:21:35.595 [2024-11-19 10:49:25.244519] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.595 [2024-11-19 10:49:25.319673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.595 [2024-11-19 10:49:25.354322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.595 [2024-11-19 10:49:25.354359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.595 [2024-11-19 10:49:25.354366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.595 [2024-11-19 10:49:25.354372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.595 [2024-11-19 10:49:25.354377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:35.595 [2024-11-19 10:49:25.354924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.854 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.854 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:35.854 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:35.854 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.854 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.854 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.854 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.fPResapnIR 00:21:35.854 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fPResapnIR 00:21:35.854 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:36.114 [2024-11-19 10:49:25.661627] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.114 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:36.114 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:36.373 [2024-11-19 10:49:26.050610] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:36.373 [2024-11-19 10:49:26.050809] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:36.373 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:36.632 malloc0 00:21:36.632 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:36.891 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fPResapnIR 00:21:37.150 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:37.150 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3950591 00:21:37.150 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:37.150 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:37.150 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3950591 /var/tmp/bdevperf.sock 00:21:37.150 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3950591 ']' 00:21:37.150 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:37.150 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.150 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:21:37.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:37.150 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.150 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.150 [2024-11-19 10:49:26.927053] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:21:37.150 [2024-11-19 10:49:26.927103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3950591 ] 00:21:37.410 [2024-11-19 10:49:27.001224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.410 [2024-11-19 10:49:27.040915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.410 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.410 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:37.410 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fPResapnIR 00:21:37.668 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:37.926 [2024-11-19 10:49:27.491620] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:37.926 TLSTESTn1 00:21:37.926 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:38.185 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:38.185 "subsystems": [ 00:21:38.185 { 00:21:38.185 "subsystem": "keyring", 00:21:38.185 "config": [ 00:21:38.185 { 00:21:38.185 "method": "keyring_file_add_key", 00:21:38.185 "params": { 00:21:38.185 "name": "key0", 00:21:38.185 "path": "/tmp/tmp.fPResapnIR" 00:21:38.185 } 00:21:38.185 } 00:21:38.185 ] 00:21:38.185 }, 00:21:38.185 { 00:21:38.185 "subsystem": "iobuf", 00:21:38.185 "config": [ 00:21:38.185 { 00:21:38.185 "method": "iobuf_set_options", 00:21:38.185 "params": { 00:21:38.185 "small_pool_count": 8192, 00:21:38.185 "large_pool_count": 1024, 00:21:38.185 "small_bufsize": 8192, 00:21:38.185 "large_bufsize": 135168, 00:21:38.186 "enable_numa": false 00:21:38.186 } 00:21:38.186 } 00:21:38.186 ] 00:21:38.186 }, 00:21:38.186 { 00:21:38.186 "subsystem": "sock", 00:21:38.186 "config": [ 00:21:38.186 { 00:21:38.186 "method": "sock_set_default_impl", 00:21:38.186 "params": { 00:21:38.186 "impl_name": "posix" 00:21:38.186 } 00:21:38.186 }, 00:21:38.186 { 00:21:38.186 "method": "sock_impl_set_options", 00:21:38.186 "params": { 00:21:38.186 "impl_name": "ssl", 00:21:38.186 "recv_buf_size": 4096, 00:21:38.186 "send_buf_size": 4096, 00:21:38.186 "enable_recv_pipe": true, 00:21:38.186 "enable_quickack": false, 00:21:38.186 "enable_placement_id": 0, 00:21:38.186 "enable_zerocopy_send_server": true, 00:21:38.186 "enable_zerocopy_send_client": false, 00:21:38.186 "zerocopy_threshold": 0, 00:21:38.186 "tls_version": 0, 00:21:38.186 "enable_ktls": false 00:21:38.186 } 00:21:38.186 }, 00:21:38.186 { 00:21:38.186 "method": "sock_impl_set_options", 00:21:38.186 "params": { 00:21:38.186 "impl_name": "posix", 00:21:38.186 "recv_buf_size": 2097152, 00:21:38.186 "send_buf_size": 2097152, 00:21:38.186 "enable_recv_pipe": true, 00:21:38.186 "enable_quickack": false, 00:21:38.186 "enable_placement_id": 0, 
00:21:38.186 "enable_zerocopy_send_server": true, 00:21:38.186 "enable_zerocopy_send_client": false, 00:21:38.186 "zerocopy_threshold": 0, 00:21:38.186 "tls_version": 0, 00:21:38.186 "enable_ktls": false 00:21:38.186 } 00:21:38.186 } 00:21:38.186 ] 00:21:38.186 }, 00:21:38.186 { 00:21:38.186 "subsystem": "vmd", 00:21:38.186 "config": [] 00:21:38.186 }, 00:21:38.186 { 00:21:38.186 "subsystem": "accel", 00:21:38.186 "config": [ 00:21:38.186 { 00:21:38.186 "method": "accel_set_options", 00:21:38.186 "params": { 00:21:38.186 "small_cache_size": 128, 00:21:38.186 "large_cache_size": 16, 00:21:38.186 "task_count": 2048, 00:21:38.186 "sequence_count": 2048, 00:21:38.186 "buf_count": 2048 00:21:38.186 } 00:21:38.186 } 00:21:38.186 ] 00:21:38.186 }, 00:21:38.186 { 00:21:38.186 "subsystem": "bdev", 00:21:38.186 "config": [ 00:21:38.186 { 00:21:38.186 "method": "bdev_set_options", 00:21:38.186 "params": { 00:21:38.186 "bdev_io_pool_size": 65535, 00:21:38.186 "bdev_io_cache_size": 256, 00:21:38.186 "bdev_auto_examine": true, 00:21:38.186 "iobuf_small_cache_size": 128, 00:21:38.186 "iobuf_large_cache_size": 16 00:21:38.186 } 00:21:38.186 }, 00:21:38.186 { 00:21:38.186 "method": "bdev_raid_set_options", 00:21:38.186 "params": { 00:21:38.186 "process_window_size_kb": 1024, 00:21:38.186 "process_max_bandwidth_mb_sec": 0 00:21:38.186 } 00:21:38.186 }, 00:21:38.186 { 00:21:38.186 "method": "bdev_iscsi_set_options", 00:21:38.186 "params": { 00:21:38.186 "timeout_sec": 30 00:21:38.186 } 00:21:38.186 }, 00:21:38.186 { 00:21:38.186 "method": "bdev_nvme_set_options", 00:21:38.186 "params": { 00:21:38.186 "action_on_timeout": "none", 00:21:38.186 "timeout_us": 0, 00:21:38.186 "timeout_admin_us": 0, 00:21:38.186 "keep_alive_timeout_ms": 10000, 00:21:38.186 "arbitration_burst": 0, 00:21:38.186 "low_priority_weight": 0, 00:21:38.186 "medium_priority_weight": 0, 00:21:38.186 "high_priority_weight": 0, 00:21:38.186 "nvme_adminq_poll_period_us": 10000, 00:21:38.186 "nvme_ioq_poll_period_us": 0, 
00:21:38.186 "io_queue_requests": 0, 00:21:38.186 "delay_cmd_submit": true, 00:21:38.186 "transport_retry_count": 4, 00:21:38.186 "bdev_retry_count": 3, 00:21:38.186 "transport_ack_timeout": 0, 00:21:38.186 "ctrlr_loss_timeout_sec": 0, 00:21:38.186 "reconnect_delay_sec": 0, 00:21:38.186 "fast_io_fail_timeout_sec": 0, 00:21:38.186 "disable_auto_failback": false, 00:21:38.186 "generate_uuids": false, 00:21:38.186 "transport_tos": 0, 00:21:38.186 "nvme_error_stat": false, 00:21:38.186 "rdma_srq_size": 0, 00:21:38.186 "io_path_stat": false, 00:21:38.186 "allow_accel_sequence": false, 00:21:38.186 "rdma_max_cq_size": 0, 00:21:38.186 "rdma_cm_event_timeout_ms": 0, 00:21:38.186 "dhchap_digests": [ 00:21:38.186 "sha256", 00:21:38.186 "sha384", 00:21:38.186 "sha512" 00:21:38.186 ], 00:21:38.186 "dhchap_dhgroups": [ 00:21:38.186 "null", 00:21:38.186 "ffdhe2048", 00:21:38.186 "ffdhe3072", 00:21:38.186 "ffdhe4096", 00:21:38.186 "ffdhe6144", 00:21:38.186 "ffdhe8192" 00:21:38.186 ] 00:21:38.186 } 00:21:38.186 }, 00:21:38.186 { 00:21:38.186 "method": "bdev_nvme_set_hotplug", 00:21:38.186 "params": { 00:21:38.186 "period_us": 100000, 00:21:38.186 "enable": false 00:21:38.186 } 00:21:38.186 }, 00:21:38.186 { 00:21:38.186 "method": "bdev_malloc_create", 00:21:38.186 "params": { 00:21:38.186 "name": "malloc0", 00:21:38.186 "num_blocks": 8192, 00:21:38.186 "block_size": 4096, 00:21:38.186 "physical_block_size": 4096, 00:21:38.186 "uuid": "cc04ddf7-95db-4155-94dc-f91e4aa9efff", 00:21:38.186 "optimal_io_boundary": 0, 00:21:38.186 "md_size": 0, 00:21:38.186 "dif_type": 0, 00:21:38.186 "dif_is_head_of_md": false, 00:21:38.186 "dif_pi_format": 0 00:21:38.186 } 00:21:38.186 }, 00:21:38.186 { 00:21:38.186 "method": "bdev_wait_for_examine" 00:21:38.186 } 00:21:38.186 ] 00:21:38.186 }, 00:21:38.186 { 00:21:38.186 "subsystem": "nbd", 00:21:38.186 "config": [] 00:21:38.186 }, 00:21:38.186 { 00:21:38.187 "subsystem": "scheduler", 00:21:38.187 "config": [ 00:21:38.187 { 00:21:38.187 "method": 
"framework_set_scheduler", 00:21:38.187 "params": { 00:21:38.187 "name": "static" 00:21:38.187 } 00:21:38.187 } 00:21:38.187 ] 00:21:38.187 }, 00:21:38.187 { 00:21:38.187 "subsystem": "nvmf", 00:21:38.187 "config": [ 00:21:38.187 { 00:21:38.187 "method": "nvmf_set_config", 00:21:38.187 "params": { 00:21:38.187 "discovery_filter": "match_any", 00:21:38.187 "admin_cmd_passthru": { 00:21:38.187 "identify_ctrlr": false 00:21:38.187 }, 00:21:38.187 "dhchap_digests": [ 00:21:38.187 "sha256", 00:21:38.187 "sha384", 00:21:38.187 "sha512" 00:21:38.187 ], 00:21:38.187 "dhchap_dhgroups": [ 00:21:38.187 "null", 00:21:38.187 "ffdhe2048", 00:21:38.187 "ffdhe3072", 00:21:38.187 "ffdhe4096", 00:21:38.187 "ffdhe6144", 00:21:38.187 "ffdhe8192" 00:21:38.187 ] 00:21:38.187 } 00:21:38.187 }, 00:21:38.187 { 00:21:38.187 "method": "nvmf_set_max_subsystems", 00:21:38.187 "params": { 00:21:38.187 "max_subsystems": 1024 00:21:38.187 } 00:21:38.187 }, 00:21:38.187 { 00:21:38.187 "method": "nvmf_set_crdt", 00:21:38.187 "params": { 00:21:38.187 "crdt1": 0, 00:21:38.187 "crdt2": 0, 00:21:38.187 "crdt3": 0 00:21:38.187 } 00:21:38.187 }, 00:21:38.187 { 00:21:38.187 "method": "nvmf_create_transport", 00:21:38.187 "params": { 00:21:38.187 "trtype": "TCP", 00:21:38.187 "max_queue_depth": 128, 00:21:38.187 "max_io_qpairs_per_ctrlr": 127, 00:21:38.187 "in_capsule_data_size": 4096, 00:21:38.187 "max_io_size": 131072, 00:21:38.187 "io_unit_size": 131072, 00:21:38.187 "max_aq_depth": 128, 00:21:38.187 "num_shared_buffers": 511, 00:21:38.187 "buf_cache_size": 4294967295, 00:21:38.187 "dif_insert_or_strip": false, 00:21:38.187 "zcopy": false, 00:21:38.187 "c2h_success": false, 00:21:38.187 "sock_priority": 0, 00:21:38.187 "abort_timeout_sec": 1, 00:21:38.187 "ack_timeout": 0, 00:21:38.187 "data_wr_pool_size": 0 00:21:38.187 } 00:21:38.187 }, 00:21:38.187 { 00:21:38.187 "method": "nvmf_create_subsystem", 00:21:38.187 "params": { 00:21:38.187 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.187 
"allow_any_host": false, 00:21:38.187 "serial_number": "SPDK00000000000001", 00:21:38.187 "model_number": "SPDK bdev Controller", 00:21:38.187 "max_namespaces": 10, 00:21:38.187 "min_cntlid": 1, 00:21:38.187 "max_cntlid": 65519, 00:21:38.187 "ana_reporting": false 00:21:38.187 } 00:21:38.187 }, 00:21:38.187 { 00:21:38.187 "method": "nvmf_subsystem_add_host", 00:21:38.187 "params": { 00:21:38.187 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.187 "host": "nqn.2016-06.io.spdk:host1", 00:21:38.187 "psk": "key0" 00:21:38.187 } 00:21:38.187 }, 00:21:38.187 { 00:21:38.187 "method": "nvmf_subsystem_add_ns", 00:21:38.187 "params": { 00:21:38.187 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.187 "namespace": { 00:21:38.187 "nsid": 1, 00:21:38.187 "bdev_name": "malloc0", 00:21:38.187 "nguid": "CC04DDF795DB415594DCF91E4AA9EFFF", 00:21:38.187 "uuid": "cc04ddf7-95db-4155-94dc-f91e4aa9efff", 00:21:38.187 "no_auto_visible": false 00:21:38.187 } 00:21:38.187 } 00:21:38.187 }, 00:21:38.187 { 00:21:38.187 "method": "nvmf_subsystem_add_listener", 00:21:38.187 "params": { 00:21:38.187 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.187 "listen_address": { 00:21:38.187 "trtype": "TCP", 00:21:38.187 "adrfam": "IPv4", 00:21:38.187 "traddr": "10.0.0.2", 00:21:38.187 "trsvcid": "4420" 00:21:38.187 }, 00:21:38.187 "secure_channel": true 00:21:38.187 } 00:21:38.187 } 00:21:38.187 ] 00:21:38.187 } 00:21:38.187 ] 00:21:38.187 }' 00:21:38.187 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:38.446 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:38.446 "subsystems": [ 00:21:38.446 { 00:21:38.446 "subsystem": "keyring", 00:21:38.446 "config": [ 00:21:38.446 { 00:21:38.446 "method": "keyring_file_add_key", 00:21:38.446 "params": { 00:21:38.446 "name": "key0", 00:21:38.446 "path": "/tmp/tmp.fPResapnIR" 00:21:38.446 } 
00:21:38.446 } 00:21:38.446 ] 00:21:38.446 }, 00:21:38.446 { 00:21:38.446 "subsystem": "iobuf", 00:21:38.446 "config": [ 00:21:38.446 { 00:21:38.446 "method": "iobuf_set_options", 00:21:38.446 "params": { 00:21:38.446 "small_pool_count": 8192, 00:21:38.446 "large_pool_count": 1024, 00:21:38.446 "small_bufsize": 8192, 00:21:38.446 "large_bufsize": 135168, 00:21:38.446 "enable_numa": false 00:21:38.446 } 00:21:38.446 } 00:21:38.446 ] 00:21:38.446 }, 00:21:38.446 { 00:21:38.446 "subsystem": "sock", 00:21:38.446 "config": [ 00:21:38.446 { 00:21:38.446 "method": "sock_set_default_impl", 00:21:38.446 "params": { 00:21:38.446 "impl_name": "posix" 00:21:38.446 } 00:21:38.446 }, 00:21:38.446 { 00:21:38.446 "method": "sock_impl_set_options", 00:21:38.446 "params": { 00:21:38.446 "impl_name": "ssl", 00:21:38.446 "recv_buf_size": 4096, 00:21:38.446 "send_buf_size": 4096, 00:21:38.446 "enable_recv_pipe": true, 00:21:38.446 "enable_quickack": false, 00:21:38.446 "enable_placement_id": 0, 00:21:38.446 "enable_zerocopy_send_server": true, 00:21:38.446 "enable_zerocopy_send_client": false, 00:21:38.446 "zerocopy_threshold": 0, 00:21:38.446 "tls_version": 0, 00:21:38.446 "enable_ktls": false 00:21:38.446 } 00:21:38.446 }, 00:21:38.446 { 00:21:38.446 "method": "sock_impl_set_options", 00:21:38.446 "params": { 00:21:38.446 "impl_name": "posix", 00:21:38.446 "recv_buf_size": 2097152, 00:21:38.446 "send_buf_size": 2097152, 00:21:38.446 "enable_recv_pipe": true, 00:21:38.446 "enable_quickack": false, 00:21:38.446 "enable_placement_id": 0, 00:21:38.446 "enable_zerocopy_send_server": true, 00:21:38.446 "enable_zerocopy_send_client": false, 00:21:38.446 "zerocopy_threshold": 0, 00:21:38.446 "tls_version": 0, 00:21:38.446 "enable_ktls": false 00:21:38.446 } 00:21:38.446 } 00:21:38.446 ] 00:21:38.446 }, 00:21:38.446 { 00:21:38.446 "subsystem": "vmd", 00:21:38.446 "config": [] 00:21:38.446 }, 00:21:38.446 { 00:21:38.446 "subsystem": "accel", 00:21:38.446 "config": [ 00:21:38.446 { 00:21:38.446 
"method": "accel_set_options", 00:21:38.446 "params": { 00:21:38.446 "small_cache_size": 128, 00:21:38.446 "large_cache_size": 16, 00:21:38.446 "task_count": 2048, 00:21:38.446 "sequence_count": 2048, 00:21:38.446 "buf_count": 2048 00:21:38.446 } 00:21:38.446 } 00:21:38.446 ] 00:21:38.446 }, 00:21:38.447 { 00:21:38.447 "subsystem": "bdev", 00:21:38.447 "config": [ 00:21:38.447 { 00:21:38.447 "method": "bdev_set_options", 00:21:38.447 "params": { 00:21:38.447 "bdev_io_pool_size": 65535, 00:21:38.447 "bdev_io_cache_size": 256, 00:21:38.447 "bdev_auto_examine": true, 00:21:38.447 "iobuf_small_cache_size": 128, 00:21:38.447 "iobuf_large_cache_size": 16 00:21:38.447 } 00:21:38.447 }, 00:21:38.447 { 00:21:38.447 "method": "bdev_raid_set_options", 00:21:38.447 "params": { 00:21:38.447 "process_window_size_kb": 1024, 00:21:38.447 "process_max_bandwidth_mb_sec": 0 00:21:38.447 } 00:21:38.447 }, 00:21:38.447 { 00:21:38.447 "method": "bdev_iscsi_set_options", 00:21:38.447 "params": { 00:21:38.447 "timeout_sec": 30 00:21:38.447 } 00:21:38.447 }, 00:21:38.447 { 00:21:38.447 "method": "bdev_nvme_set_options", 00:21:38.447 "params": { 00:21:38.447 "action_on_timeout": "none", 00:21:38.447 "timeout_us": 0, 00:21:38.447 "timeout_admin_us": 0, 00:21:38.447 "keep_alive_timeout_ms": 10000, 00:21:38.447 "arbitration_burst": 0, 00:21:38.447 "low_priority_weight": 0, 00:21:38.447 "medium_priority_weight": 0, 00:21:38.447 "high_priority_weight": 0, 00:21:38.447 "nvme_adminq_poll_period_us": 10000, 00:21:38.447 "nvme_ioq_poll_period_us": 0, 00:21:38.447 "io_queue_requests": 512, 00:21:38.447 "delay_cmd_submit": true, 00:21:38.447 "transport_retry_count": 4, 00:21:38.447 "bdev_retry_count": 3, 00:21:38.447 "transport_ack_timeout": 0, 00:21:38.447 "ctrlr_loss_timeout_sec": 0, 00:21:38.447 "reconnect_delay_sec": 0, 00:21:38.447 "fast_io_fail_timeout_sec": 0, 00:21:38.447 "disable_auto_failback": false, 00:21:38.447 "generate_uuids": false, 00:21:38.447 "transport_tos": 0, 00:21:38.447 
"nvme_error_stat": false, 00:21:38.447 "rdma_srq_size": 0, 00:21:38.447 "io_path_stat": false, 00:21:38.447 "allow_accel_sequence": false, 00:21:38.447 "rdma_max_cq_size": 0, 00:21:38.447 "rdma_cm_event_timeout_ms": 0, 00:21:38.447 "dhchap_digests": [ 00:21:38.447 "sha256", 00:21:38.447 "sha384", 00:21:38.447 "sha512" 00:21:38.447 ], 00:21:38.447 "dhchap_dhgroups": [ 00:21:38.447 "null", 00:21:38.447 "ffdhe2048", 00:21:38.447 "ffdhe3072", 00:21:38.447 "ffdhe4096", 00:21:38.447 "ffdhe6144", 00:21:38.447 "ffdhe8192" 00:21:38.447 ] 00:21:38.447 } 00:21:38.447 }, 00:21:38.447 { 00:21:38.447 "method": "bdev_nvme_attach_controller", 00:21:38.447 "params": { 00:21:38.447 "name": "TLSTEST", 00:21:38.447 "trtype": "TCP", 00:21:38.447 "adrfam": "IPv4", 00:21:38.447 "traddr": "10.0.0.2", 00:21:38.447 "trsvcid": "4420", 00:21:38.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.447 "prchk_reftag": false, 00:21:38.447 "prchk_guard": false, 00:21:38.447 "ctrlr_loss_timeout_sec": 0, 00:21:38.447 "reconnect_delay_sec": 0, 00:21:38.447 "fast_io_fail_timeout_sec": 0, 00:21:38.447 "psk": "key0", 00:21:38.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:38.447 "hdgst": false, 00:21:38.447 "ddgst": false, 00:21:38.447 "multipath": "multipath" 00:21:38.447 } 00:21:38.447 }, 00:21:38.447 { 00:21:38.447 "method": "bdev_nvme_set_hotplug", 00:21:38.447 "params": { 00:21:38.447 "period_us": 100000, 00:21:38.447 "enable": false 00:21:38.447 } 00:21:38.447 }, 00:21:38.447 { 00:21:38.447 "method": "bdev_wait_for_examine" 00:21:38.447 } 00:21:38.447 ] 00:21:38.447 }, 00:21:38.447 { 00:21:38.447 "subsystem": "nbd", 00:21:38.447 "config": [] 00:21:38.447 } 00:21:38.447 ] 00:21:38.447 }' 00:21:38.447 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3950591 00:21:38.447 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3950591 ']' 00:21:38.447 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 3950591 00:21:38.447 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:38.447 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.447 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3950591 00:21:38.447 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:38.447 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:38.447 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3950591' 00:21:38.447 killing process with pid 3950591 00:21:38.447 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3950591 00:21:38.447 Received shutdown signal, test time was about 10.000000 seconds 00:21:38.447 00:21:38.447 Latency(us) 00:21:38.447 [2024-11-19T09:49:28.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.447 [2024-11-19T09:49:28.239Z] =================================================================================================================== 00:21:38.447 [2024-11-19T09:49:28.239Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:38.447 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3950591 00:21:38.707 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3950337 00:21:38.707 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3950337 ']' 00:21:38.707 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3950337 00:21:38.707 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:38.707 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.707 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3950337 00:21:38.707 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:38.707 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:38.707 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3950337' 00:21:38.707 killing process with pid 3950337 00:21:38.707 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3950337 00:21:38.707 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3950337 00:21:38.967 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:38.967 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:38.967 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:38.967 "subsystems": [ 00:21:38.967 { 00:21:38.967 "subsystem": "keyring", 00:21:38.967 "config": [ 00:21:38.967 { 00:21:38.967 "method": "keyring_file_add_key", 00:21:38.967 "params": { 00:21:38.967 "name": "key0", 00:21:38.967 "path": "/tmp/tmp.fPResapnIR" 00:21:38.967 } 00:21:38.967 } 00:21:38.967 ] 00:21:38.967 }, 00:21:38.967 { 00:21:38.967 "subsystem": "iobuf", 00:21:38.967 "config": [ 00:21:38.967 { 00:21:38.967 "method": "iobuf_set_options", 00:21:38.967 "params": { 00:21:38.967 "small_pool_count": 8192, 00:21:38.967 "large_pool_count": 1024, 00:21:38.967 "small_bufsize": 8192, 00:21:38.967 "large_bufsize": 135168, 00:21:38.967 "enable_numa": false 00:21:38.967 } 00:21:38.967 } 00:21:38.967 ] 00:21:38.967 }, 00:21:38.967 { 00:21:38.967 "subsystem": "sock", 00:21:38.967 "config": [ 00:21:38.967 { 00:21:38.967 "method": 
"sock_set_default_impl", 00:21:38.967 "params": { 00:21:38.967 "impl_name": "posix" 00:21:38.967 } 00:21:38.967 }, 00:21:38.967 { 00:21:38.967 "method": "sock_impl_set_options", 00:21:38.967 "params": { 00:21:38.967 "impl_name": "ssl", 00:21:38.967 "recv_buf_size": 4096, 00:21:38.967 "send_buf_size": 4096, 00:21:38.967 "enable_recv_pipe": true, 00:21:38.967 "enable_quickack": false, 00:21:38.967 "enable_placement_id": 0, 00:21:38.967 "enable_zerocopy_send_server": true, 00:21:38.967 "enable_zerocopy_send_client": false, 00:21:38.967 "zerocopy_threshold": 0, 00:21:38.967 "tls_version": 0, 00:21:38.967 "enable_ktls": false 00:21:38.967 } 00:21:38.967 }, 00:21:38.967 { 00:21:38.967 "method": "sock_impl_set_options", 00:21:38.967 "params": { 00:21:38.967 "impl_name": "posix", 00:21:38.967 "recv_buf_size": 2097152, 00:21:38.967 "send_buf_size": 2097152, 00:21:38.967 "enable_recv_pipe": true, 00:21:38.967 "enable_quickack": false, 00:21:38.967 "enable_placement_id": 0, 00:21:38.967 "enable_zerocopy_send_server": true, 00:21:38.967 "enable_zerocopy_send_client": false, 00:21:38.967 "zerocopy_threshold": 0, 00:21:38.967 "tls_version": 0, 00:21:38.967 "enable_ktls": false 00:21:38.967 } 00:21:38.967 } 00:21:38.967 ] 00:21:38.967 }, 00:21:38.967 { 00:21:38.967 "subsystem": "vmd", 00:21:38.967 "config": [] 00:21:38.967 }, 00:21:38.967 { 00:21:38.967 "subsystem": "accel", 00:21:38.967 "config": [ 00:21:38.967 { 00:21:38.967 "method": "accel_set_options", 00:21:38.967 "params": { 00:21:38.967 "small_cache_size": 128, 00:21:38.967 "large_cache_size": 16, 00:21:38.967 "task_count": 2048, 00:21:38.967 "sequence_count": 2048, 00:21:38.967 "buf_count": 2048 00:21:38.967 } 00:21:38.967 } 00:21:38.967 ] 00:21:38.967 }, 00:21:38.967 { 00:21:38.967 "subsystem": "bdev", 00:21:38.967 "config": [ 00:21:38.967 { 00:21:38.967 "method": "bdev_set_options", 00:21:38.967 "params": { 00:21:38.967 "bdev_io_pool_size": 65535, 00:21:38.967 "bdev_io_cache_size": 256, 00:21:38.967 
"bdev_auto_examine": true, 00:21:38.967 "iobuf_small_cache_size": 128, 00:21:38.967 "iobuf_large_cache_size": 16 00:21:38.967 } 00:21:38.967 }, 00:21:38.967 { 00:21:38.967 "method": "bdev_raid_set_options", 00:21:38.967 "params": { 00:21:38.967 "process_window_size_kb": 1024, 00:21:38.967 "process_max_bandwidth_mb_sec": 0 00:21:38.967 } 00:21:38.967 }, 00:21:38.968 { 00:21:38.968 "method": "bdev_iscsi_set_options", 00:21:38.968 "params": { 00:21:38.968 "timeout_sec": 30 00:21:38.968 } 00:21:38.968 }, 00:21:38.968 { 00:21:38.968 "method": "bdev_nvme_set_options", 00:21:38.968 "params": { 00:21:38.968 "action_on_timeout": "none", 00:21:38.968 "timeout_us": 0, 00:21:38.968 "timeout_admin_us": 0, 00:21:38.968 "keep_alive_timeout_ms": 10000, 00:21:38.968 "arbitration_burst": 0, 00:21:38.968 "low_priority_weight": 0, 00:21:38.968 "medium_priority_weight": 0, 00:21:38.968 "high_priority_weight": 0, 00:21:38.968 "nvme_adminq_poll_period_us": 10000, 00:21:38.968 "nvme_ioq_poll_period_us": 0, 00:21:38.968 "io_queue_requests": 0, 00:21:38.968 "delay_cmd_submit": true, 00:21:38.968 "transport_retry_count": 4, 00:21:38.968 "bdev_retry_count": 3, 00:21:38.968 "transport_ack_timeout": 0, 00:21:38.968 "ctrlr_loss_timeout_sec": 0, 00:21:38.968 "reconnect_delay_sec": 0, 00:21:38.968 "fast_io_fail_timeout_sec": 0, 00:21:38.968 "disable_auto_failback": false, 00:21:38.968 "generate_uuids": false, 00:21:38.968 "transport_tos": 0, 00:21:38.968 "nvme_error_stat": false, 00:21:38.968 "rdma_srq_size": 0, 00:21:38.968 "io_path_stat": false, 00:21:38.968 "allow_accel_sequence": false, 00:21:38.968 "rdma_max_cq_size": 0, 00:21:38.968 "rdma_cm_event_timeout_ms": 0, 00:21:38.968 "dhchap_digests": [ 00:21:38.968 "sha256", 00:21:38.968 "sha384", 00:21:38.968 "sha512" 00:21:38.968 ], 00:21:38.968 "dhchap_dhgroups": [ 00:21:38.968 "null", 00:21:38.968 "ffdhe2048", 00:21:38.968 "ffdhe3072", 00:21:38.968 "ffdhe4096", 00:21:38.968 "ffdhe6144", 00:21:38.968 "ffdhe8192" 00:21:38.968 ] 00:21:38.968 } 
00:21:38.968 }, 00:21:38.968 { 00:21:38.968 "method": "bdev_nvme_set_hotplug", 00:21:38.968 "params": { 00:21:38.968 "period_us": 100000, 00:21:38.968 "enable": false 00:21:38.968 } 00:21:38.968 }, 00:21:38.968 { 00:21:38.968 "method": "bdev_malloc_create", 00:21:38.968 "params": { 00:21:38.968 "name": "malloc0", 00:21:38.968 "num_blocks": 8192, 00:21:38.968 "block_size": 4096, 00:21:38.968 "physical_block_size": 4096, 00:21:38.968 "uuid": "cc04ddf7-95db-4155-94dc-f91e4aa9efff", 00:21:38.968 "optimal_io_boundary": 0, 00:21:38.968 "md_size": 0, 00:21:38.968 "dif_type": 0, 00:21:38.968 "dif_is_head_of_md": false, 00:21:38.968 "dif_pi_format": 0 00:21:38.968 } 00:21:38.968 }, 00:21:38.968 { 00:21:38.968 "method": "bdev_wait_for_examine" 00:21:38.968 } 00:21:38.968 ] 00:21:38.968 }, 00:21:38.968 { 00:21:38.968 "subsystem": "nbd", 00:21:38.968 "config": [] 00:21:38.968 }, 00:21:38.968 { 00:21:38.968 "subsystem": "scheduler", 00:21:38.968 "config": [ 00:21:38.968 { 00:21:38.968 "method": "framework_set_scheduler", 00:21:38.968 "params": { 00:21:38.968 "name": "static" 00:21:38.968 } 00:21:38.968 } 00:21:38.968 ] 00:21:38.968 }, 00:21:38.968 { 00:21:38.968 "subsystem": "nvmf", 00:21:38.968 "config": [ 00:21:38.968 { 00:21:38.968 "method": "nvmf_set_config", 00:21:38.968 "params": { 00:21:38.968 "discovery_filter": "match_any", 00:21:38.968 "admin_cmd_passthru": { 00:21:38.968 "identify_ctrlr": false 00:21:38.968 }, 00:21:38.968 "dhchap_digests": [ 00:21:38.968 "sha256", 00:21:38.968 "sha384", 00:21:38.968 "sha512" 00:21:38.968 ], 00:21:38.968 "dhchap_dhgroups": [ 00:21:38.968 "null", 00:21:38.968 "ffdhe2048", 00:21:38.968 "ffdhe3072", 00:21:38.968 "ffdhe4096", 00:21:38.968 "ffdhe6144", 00:21:38.968 "ffdhe8192" 00:21:38.968 ] 00:21:38.968 } 00:21:38.968 }, 00:21:38.968 { 00:21:38.968 "method": "nvmf_set_max_subsystems", 00:21:38.968 "params": { 00:21:38.968 "max_subsystems": 1024 00:21:38.968 } 00:21:38.968 }, 00:21:38.968 { 00:21:38.968 "method": "nvmf_set_crdt", 
00:21:38.968 "params": { 00:21:38.968 "crdt1": 0, 00:21:38.968 "crdt2": 0, 00:21:38.968 "crdt3": 0 00:21:38.968 } 00:21:38.968 }, 00:21:38.968 { 00:21:38.968 "method": "nvmf_create_transport", 00:21:38.968 "params": { 00:21:38.968 "trtype": "TCP", 00:21:38.968 "max_queue_depth": 128, 00:21:38.968 "max_io_qpairs_per_ctrlr": 127, 00:21:38.968 "in_capsule_data_size": 4096, 00:21:38.968 "max_io_size": 131072, 00:21:38.968 "io_unit_size": 131072, 00:21:38.968 "max_aq_depth": 128, 00:21:38.968 "num_shared_buffers": 511, 00:21:38.968 "buf_cache_size": 4294967295, 00:21:38.968 "dif_insert_or_strip": false, 00:21:38.968 "zcopy": false, 00:21:38.968 "c2h_success": false, 00:21:38.968 "sock_priority": 0, 00:21:38.968 "abort_timeout_sec": 1, 00:21:38.968 "ack_timeout": 0, 00:21:38.968 "data_wr_pool_size": 0 00:21:38.968 } 00:21:38.968 }, 00:21:38.968 { 00:21:38.968 "method": "nvmf_create_subsystem", 00:21:38.968 "params": { 00:21:38.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.968 "allow_any_host": false, 00:21:38.968 "serial_number": "SPDK00000000000001", 00:21:38.968 "model_number": "SPDK bdev Controller", 00:21:38.968 "max_namespaces": 10, 00:21:38.968 "min_cntlid": 1, 00:21:38.968 "max_cntlid": 65519, 00:21:38.968 "ana_reporting": false 00:21:38.968 } 00:21:38.968 }, 00:21:38.968 { 00:21:38.968 "method": "nvmf_subsystem_add_host", 00:21:38.968 "params": { 00:21:38.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.968 "host": "nqn.2016-06.io.spdk:host1", 00:21:38.968 "psk": "key0" 00:21:38.968 } 00:21:38.968 }, 00:21:38.968 { 00:21:38.968 "method": "nvmf_subsystem_add_ns", 00:21:38.968 "params": { 00:21:38.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.968 "namespace": { 00:21:38.968 "nsid": 1, 00:21:38.968 "bdev_name": "malloc0", 00:21:38.968 "nguid": "CC04DDF795DB415594DCF91E4AA9EFFF", 00:21:38.968 "uuid": "cc04ddf7-95db-4155-94dc-f91e4aa9efff", 00:21:38.968 "no_auto_visible": false 00:21:38.968 } 00:21:38.968 } 00:21:38.968 }, 00:21:38.968 { 00:21:38.968 
"method": "nvmf_subsystem_add_listener", 00:21:38.968 "params": { 00:21:38.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.968 "listen_address": { 00:21:38.968 "trtype": "TCP", 00:21:38.968 "adrfam": "IPv4", 00:21:38.968 "traddr": "10.0.0.2", 00:21:38.968 "trsvcid": "4420" 00:21:38.968 }, 00:21:38.968 "secure_channel": true 00:21:38.968 } 00:21:38.968 } 00:21:38.968 ] 00:21:38.968 } 00:21:38.968 ] 00:21:38.968 }' 00:21:38.968 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:38.968 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.968 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3950844 00:21:38.968 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:38.968 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3950844 00:21:38.969 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3950844 ']' 00:21:38.969 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.969 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.969 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:38.969 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.969 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.969 [2024-11-19 10:49:28.619255] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:21:38.969 [2024-11-19 10:49:28.619304] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.969 [2024-11-19 10:49:28.678923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.969 [2024-11-19 10:49:28.719475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.969 [2024-11-19 10:49:28.719507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.969 [2024-11-19 10:49:28.719514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.969 [2024-11-19 10:49:28.719523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.969 [2024-11-19 10:49:28.719528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:38.969 [2024-11-19 10:49:28.720124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.228 [2024-11-19 10:49:28.932414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.228 [2024-11-19 10:49:28.964438] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.228 [2024-11-19 10:49:28.964657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.798 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.798 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:39.798 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:39.798 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:39.798 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.798 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.798 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3951086 00:21:39.798 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3951086 /var/tmp/bdevperf.sock 00:21:39.798 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3951086 ']' 00:21:39.798 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.798 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:39.798 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:21:39.798 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.798 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:39.798 "subsystems": [ 00:21:39.798 { 00:21:39.798 "subsystem": "keyring", 00:21:39.798 "config": [ 00:21:39.798 { 00:21:39.798 "method": "keyring_file_add_key", 00:21:39.798 "params": { 00:21:39.799 "name": "key0", 00:21:39.799 "path": "/tmp/tmp.fPResapnIR" 00:21:39.799 } 00:21:39.799 } 00:21:39.799 ] 00:21:39.799 }, 00:21:39.799 { 00:21:39.799 "subsystem": "iobuf", 00:21:39.799 "config": [ 00:21:39.799 { 00:21:39.799 "method": "iobuf_set_options", 00:21:39.799 "params": { 00:21:39.799 "small_pool_count": 8192, 00:21:39.799 "large_pool_count": 1024, 00:21:39.799 "small_bufsize": 8192, 00:21:39.799 "large_bufsize": 135168, 00:21:39.799 "enable_numa": false 00:21:39.799 } 00:21:39.799 } 00:21:39.799 ] 00:21:39.799 }, 00:21:39.799 { 00:21:39.799 "subsystem": "sock", 00:21:39.799 "config": [ 00:21:39.799 { 00:21:39.799 "method": "sock_set_default_impl", 00:21:39.799 "params": { 00:21:39.799 "impl_name": "posix" 00:21:39.799 } 00:21:39.799 }, 00:21:39.799 { 00:21:39.799 "method": "sock_impl_set_options", 00:21:39.799 "params": { 00:21:39.799 "impl_name": "ssl", 00:21:39.799 "recv_buf_size": 4096, 00:21:39.799 "send_buf_size": 4096, 00:21:39.799 "enable_recv_pipe": true, 00:21:39.799 "enable_quickack": false, 00:21:39.799 "enable_placement_id": 0, 00:21:39.799 "enable_zerocopy_send_server": true, 00:21:39.799 "enable_zerocopy_send_client": false, 00:21:39.799 "zerocopy_threshold": 0, 00:21:39.799 "tls_version": 0, 00:21:39.799 "enable_ktls": false 00:21:39.799 } 00:21:39.799 }, 00:21:39.799 { 00:21:39.799 "method": "sock_impl_set_options", 00:21:39.799 "params": { 
00:21:39.799 "impl_name": "posix", 00:21:39.799 "recv_buf_size": 2097152, 00:21:39.799 "send_buf_size": 2097152, 00:21:39.799 "enable_recv_pipe": true, 00:21:39.799 "enable_quickack": false, 00:21:39.799 "enable_placement_id": 0, 00:21:39.799 "enable_zerocopy_send_server": true, 00:21:39.799 "enable_zerocopy_send_client": false, 00:21:39.799 "zerocopy_threshold": 0, 00:21:39.799 "tls_version": 0, 00:21:39.799 "enable_ktls": false 00:21:39.799 } 00:21:39.799 } 00:21:39.799 ] 00:21:39.799 }, 00:21:39.799 { 00:21:39.799 "subsystem": "vmd", 00:21:39.799 "config": [] 00:21:39.799 }, 00:21:39.799 { 00:21:39.799 "subsystem": "accel", 00:21:39.799 "config": [ 00:21:39.799 { 00:21:39.799 "method": "accel_set_options", 00:21:39.799 "params": { 00:21:39.799 "small_cache_size": 128, 00:21:39.799 "large_cache_size": 16, 00:21:39.799 "task_count": 2048, 00:21:39.799 "sequence_count": 2048, 00:21:39.799 "buf_count": 2048 00:21:39.799 } 00:21:39.799 } 00:21:39.799 ] 00:21:39.799 }, 00:21:39.799 { 00:21:39.799 "subsystem": "bdev", 00:21:39.799 "config": [ 00:21:39.799 { 00:21:39.799 "method": "bdev_set_options", 00:21:39.799 "params": { 00:21:39.799 "bdev_io_pool_size": 65535, 00:21:39.799 "bdev_io_cache_size": 256, 00:21:39.799 "bdev_auto_examine": true, 00:21:39.799 "iobuf_small_cache_size": 128, 00:21:39.799 "iobuf_large_cache_size": 16 00:21:39.799 } 00:21:39.799 }, 00:21:39.799 { 00:21:39.799 "method": "bdev_raid_set_options", 00:21:39.799 "params": { 00:21:39.799 "process_window_size_kb": 1024, 00:21:39.799 "process_max_bandwidth_mb_sec": 0 00:21:39.799 } 00:21:39.799 }, 00:21:39.799 { 00:21:39.799 "method": "bdev_iscsi_set_options", 00:21:39.799 "params": { 00:21:39.799 "timeout_sec": 30 00:21:39.799 } 00:21:39.799 }, 00:21:39.799 { 00:21:39.799 "method": "bdev_nvme_set_options", 00:21:39.799 "params": { 00:21:39.799 "action_on_timeout": "none", 00:21:39.799 "timeout_us": 0, 00:21:39.799 "timeout_admin_us": 0, 00:21:39.799 "keep_alive_timeout_ms": 10000, 00:21:39.799 
"arbitration_burst": 0, 00:21:39.799 "low_priority_weight": 0, 00:21:39.799 "medium_priority_weight": 0, 00:21:39.799 "high_priority_weight": 0, 00:21:39.799 "nvme_adminq_poll_period_us": 10000, 00:21:39.799 "nvme_ioq_poll_period_us": 0, 00:21:39.799 "io_queue_requests": 512, 00:21:39.799 "delay_cmd_submit": true, 00:21:39.799 "transport_retry_count": 4, 00:21:39.799 "bdev_retry_count": 3, 00:21:39.799 "transport_ack_timeout": 0, 00:21:39.799 "ctrlr_loss_timeout_sec": 0, 00:21:39.799 "reconnect_delay_sec": 0, 00:21:39.799 "fast_io_fail_timeout_sec": 0, 00:21:39.799 "disable_auto_failback": false, 00:21:39.799 "generate_uuids": false, 00:21:39.799 "transport_tos": 0, 00:21:39.799 "nvme_error_stat": false, 00:21:39.799 "rdma_srq_size": 0, 00:21:39.799 "io_path_stat": false, 00:21:39.799 "allow_accel_sequence": false, 00:21:39.799 "rdma_max_cq_size": 0, 00:21:39.799 "rdma_cm_event_timeout_ms": 0, 00:21:39.799 "dhchap_digests": [ 00:21:39.799 "sha256", 00:21:39.799 "sha384", 00:21:39.799 "sha512" 00:21:39.799 ], 00:21:39.799 "dhchap_dhgroups": [ 00:21:39.799 "null", 00:21:39.799 "ffdhe2048", 00:21:39.799 "ffdhe3072", 00:21:39.799 "ffdhe4096", 00:21:39.799 "ffdhe6144", 00:21:39.799 "ffdhe8192" 00:21:39.799 ] 00:21:39.799 } 00:21:39.799 }, 00:21:39.799 { 00:21:39.799 "method": "bdev_nvme_attach_controller", 00:21:39.799 "params": { 00:21:39.799 "name": "TLSTEST", 00:21:39.799 "trtype": "TCP", 00:21:39.799 "adrfam": "IPv4", 00:21:39.799 "traddr": "10.0.0.2", 00:21:39.799 "trsvcid": "4420", 00:21:39.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.799 "prchk_reftag": false, 00:21:39.799 "prchk_guard": false, 00:21:39.799 "ctrlr_loss_timeout_sec": 0, 00:21:39.799 "reconnect_delay_sec": 0, 00:21:39.799 "fast_io_fail_timeout_sec": 0, 00:21:39.799 "psk": "key0", 00:21:39.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:39.799 "hdgst": false, 00:21:39.799 "ddgst": false, 00:21:39.799 "multipath": "multipath" 00:21:39.799 } 00:21:39.799 }, 00:21:39.799 { 00:21:39.799 
"method": "bdev_nvme_set_hotplug", 00:21:39.799 "params": { 00:21:39.799 "period_us": 100000, 00:21:39.799 "enable": false 00:21:39.799 } 00:21:39.799 }, 00:21:39.799 { 00:21:39.799 "method": "bdev_wait_for_examine" 00:21:39.799 } 00:21:39.799 ] 00:21:39.799 }, 00:21:39.799 { 00:21:39.799 "subsystem": "nbd", 00:21:39.799 "config": [] 00:21:39.799 } 00:21:39.799 ] 00:21:39.799 }' 00:21:39.799 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.799 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.799 [2024-11-19 10:49:29.535571] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:21:39.799 [2024-11-19 10:49:29.535621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951086 ] 00:21:40.082 [2024-11-19 10:49:29.613073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.082 [2024-11-19 10:49:29.652432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.082 [2024-11-19 10:49:29.803048] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:40.691 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.691 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:40.692 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:40.692 Running I/O for 10 seconds... 
00:21:43.004 5486.00 IOPS, 21.43 MiB/s [2024-11-19T09:49:33.733Z] 5509.50 IOPS, 21.52 MiB/s [2024-11-19T09:49:34.668Z] 5508.00 IOPS, 21.52 MiB/s [2024-11-19T09:49:35.606Z] 5531.50 IOPS, 21.61 MiB/s [2024-11-19T09:49:36.542Z] 5552.40 IOPS, 21.69 MiB/s [2024-11-19T09:49:37.919Z] 5565.67 IOPS, 21.74 MiB/s [2024-11-19T09:49:38.855Z] 5557.86 IOPS, 21.71 MiB/s [2024-11-19T09:49:39.789Z] 5553.62 IOPS, 21.69 MiB/s [2024-11-19T09:49:40.727Z] 5553.22 IOPS, 21.69 MiB/s [2024-11-19T09:49:40.727Z] 5533.70 IOPS, 21.62 MiB/s 00:21:50.935 Latency(us) 00:21:50.935 [2024-11-19T09:49:40.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.935 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:50.935 Verification LBA range: start 0x0 length 0x2000 00:21:50.935 TLSTESTn1 : 10.02 5534.52 21.62 0.00 0.00 23087.34 5492.54 23093.64 00:21:50.935 [2024-11-19T09:49:40.727Z] =================================================================================================================== 00:21:50.935 [2024-11-19T09:49:40.727Z] Total : 5534.52 21.62 0.00 0.00 23087.34 5492.54 23093.64 00:21:50.935 { 00:21:50.935 "results": [ 00:21:50.935 { 00:21:50.935 "job": "TLSTESTn1", 00:21:50.935 "core_mask": "0x4", 00:21:50.935 "workload": "verify", 00:21:50.935 "status": "finished", 00:21:50.935 "verify_range": { 00:21:50.935 "start": 0, 00:21:50.935 "length": 8192 00:21:50.935 }, 00:21:50.935 "queue_depth": 128, 00:21:50.935 "io_size": 4096, 00:21:50.935 "runtime": 10.02146, 00:21:50.935 "iops": 5534.522913826928, 00:21:50.935 "mibps": 21.619230132136437, 00:21:50.935 "io_failed": 0, 00:21:50.935 "io_timeout": 0, 00:21:50.935 "avg_latency_us": 23087.343849773, 00:21:50.935 "min_latency_us": 5492.540952380952, 00:21:50.935 "max_latency_us": 23093.638095238097 00:21:50.935 } 00:21:50.935 ], 00:21:50.935 "core_count": 1 00:21:50.935 } 00:21:50.935 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:21:50.935 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3951086 00:21:50.935 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3951086 ']' 00:21:50.935 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3951086 00:21:50.935 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:50.935 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.935 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3951086 00:21:50.935 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:50.935 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:50.935 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3951086' 00:21:50.935 killing process with pid 3951086 00:21:50.935 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3951086 00:21:50.935 Received shutdown signal, test time was about 10.000000 seconds 00:21:50.935 00:21:50.935 Latency(us) 00:21:50.935 [2024-11-19T09:49:40.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.935 [2024-11-19T09:49:40.727Z] =================================================================================================================== 00:21:50.935 [2024-11-19T09:49:40.727Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:50.935 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3951086 00:21:51.194 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3950844 00:21:51.194 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 3950844 ']' 00:21:51.194 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3950844 00:21:51.194 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:51.194 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.194 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3950844 00:21:51.194 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:51.194 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:51.194 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3950844' 00:21:51.194 killing process with pid 3950844 00:21:51.194 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3950844 00:21:51.194 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3950844 00:21:51.195 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:51.195 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:51.195 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:51.195 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.195 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3952937 00:21:51.195 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3952937 00:21:51.195 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:51.195 
10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3952937 ']' 00:21:51.195 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.195 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.195 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.195 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.195 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.454 [2024-11-19 10:49:41.018695] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:21:51.454 [2024-11-19 10:49:41.018744] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.454 [2024-11-19 10:49:41.096832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.454 [2024-11-19 10:49:41.136527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.454 [2024-11-19 10:49:41.136564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.454 [2024-11-19 10:49:41.136570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.454 [2024-11-19 10:49:41.136576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:51.454 [2024-11-19 10:49:41.136581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.454 [2024-11-19 10:49:41.137154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.390 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.390 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:52.390 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:52.390 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:52.390 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.390 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.391 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.fPResapnIR 00:21:52.391 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fPResapnIR 00:21:52.391 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:52.391 [2024-11-19 10:49:42.053409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.391 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:52.649 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:52.908 [2024-11-19 10:49:42.458442] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:21:52.908 [2024-11-19 10:49:42.458651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.908 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:52.908 malloc0 00:21:52.908 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:53.167 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fPResapnIR 00:21:53.426 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:53.686 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:53.686 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3953216 00:21:53.686 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:53.686 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3953216 /var/tmp/bdevperf.sock 00:21:53.686 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3953216 ']' 00:21:53.686 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:53.686 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.686 
10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:53.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:53.686 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.686 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.686 [2024-11-19 10:49:43.284415] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:21:53.686 [2024-11-19 10:49:43.284467] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3953216 ] 00:21:53.686 [2024-11-19 10:49:43.362259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.686 [2024-11-19 10:49:43.403093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.945 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.946 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:53.946 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fPResapnIR 00:21:53.946 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:54.205 [2024-11-19 10:49:43.870467] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:21:54.205 nvme0n1 00:21:54.205 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:54.464 Running I/O for 1 seconds... 00:21:55.403 5309.00 IOPS, 20.74 MiB/s 00:21:55.403 Latency(us) 00:21:55.403 [2024-11-19T09:49:45.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.403 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:55.403 Verification LBA range: start 0x0 length 0x2000 00:21:55.403 nvme0n1 : 1.02 5350.02 20.90 0.00 0.00 23759.19 7240.17 37698.80 00:21:55.403 [2024-11-19T09:49:45.195Z] =================================================================================================================== 00:21:55.403 [2024-11-19T09:49:45.195Z] Total : 5350.02 20.90 0.00 0.00 23759.19 7240.17 37698.80 00:21:55.403 { 00:21:55.403 "results": [ 00:21:55.403 { 00:21:55.403 "job": "nvme0n1", 00:21:55.403 "core_mask": "0x2", 00:21:55.403 "workload": "verify", 00:21:55.403 "status": "finished", 00:21:55.403 "verify_range": { 00:21:55.403 "start": 0, 00:21:55.403 "length": 8192 00:21:55.403 }, 00:21:55.403 "queue_depth": 128, 00:21:55.403 "io_size": 4096, 00:21:55.403 "runtime": 1.016445, 00:21:55.403 "iops": 5350.018938555456, 00:21:55.403 "mibps": 20.89851147873225, 00:21:55.403 "io_failed": 0, 00:21:55.403 "io_timeout": 0, 00:21:55.403 "avg_latency_us": 23759.190212087775, 00:21:55.403 "min_latency_us": 7240.167619047619, 00:21:55.403 "max_latency_us": 37698.80380952381 00:21:55.403 } 00:21:55.403 ], 00:21:55.403 "core_count": 1 00:21:55.403 } 00:21:55.403 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3953216 00:21:55.403 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3953216 ']' 00:21:55.403 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 3953216 00:21:55.403 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:55.403 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.403 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3953216 00:21:55.403 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:55.403 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:55.403 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3953216' 00:21:55.403 killing process with pid 3953216 00:21:55.403 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3953216 00:21:55.403 Received shutdown signal, test time was about 1.000000 seconds 00:21:55.403 00:21:55.403 Latency(us) 00:21:55.403 [2024-11-19T09:49:45.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.403 [2024-11-19T09:49:45.195Z] =================================================================================================================== 00:21:55.403 [2024-11-19T09:49:45.195Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:55.403 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3953216 00:21:55.663 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3952937 00:21:55.663 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3952937 ']' 00:21:55.663 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3952937 00:21:55.663 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:55.663 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.663 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3952937 00:21:55.663 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:55.663 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:55.663 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3952937' 00:21:55.663 killing process with pid 3952937 00:21:55.663 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3952937 00:21:55.663 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3952937 00:21:55.923 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:55.923 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:55.923 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:55.923 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.923 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3953673 00:21:55.923 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:55.923 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3953673 00:21:55.923 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3953673 ']' 00:21:55.923 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.923 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:21:55.923 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.923 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.923 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.923 [2024-11-19 10:49:45.578760] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:21:55.923 [2024-11-19 10:49:45.578809] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.923 [2024-11-19 10:49:45.660419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.923 [2024-11-19 10:49:45.695556] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.923 [2024-11-19 10:49:45.695591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.923 [2024-11-19 10:49:45.695598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.923 [2024-11-19 10:49:45.695603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.923 [2024-11-19 10:49:45.695608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:55.923 [2024-11-19 10:49:45.696193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.861 [2024-11-19 10:49:46.461024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.861 malloc0 00:21:56.861 [2024-11-19 10:49:46.489277] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:56.861 [2024-11-19 10:49:46.489483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3953919 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3953919 /var/tmp/bdevperf.sock 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3953919 ']' 00:21:56.861 10:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.861 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:56.861 [2024-11-19 10:49:46.565882] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:21:56.861 [2024-11-19 10:49:46.565926] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3953919 ] 00:21:56.861 [2024-11-19 10:49:46.641124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.120 [2024-11-19 10:49:46.685583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.120 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.120 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:57.120 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fPResapnIR 00:21:57.378 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:57.378 [2024-11-19 10:49:47.121958] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:57.637 nvme0n1 00:21:57.637 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:57.637 Running I/O for 1 seconds... 
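The bdevperf runs in this log report a JSON results block with `iops`, `mibps`, `io_size`, and `runtime` fields. As a standalone sanity-check sketch (not part of the test harness itself), the reported MiB/s can be cross-checked against iops × io_size / 2^20, using figures copied from the 10-second TLSTESTn1 run earlier in this log:

```python
# Cross-check bdevperf's reported MiB/s against iops * io_size / 2**20.
import json

# Figures copied verbatim from the TLSTESTn1 results block above; in
# practice this JSON would be parsed from bdevperf's captured output.
results = json.loads("""
{
  "results": [
    {
      "job": "TLSTESTn1",
      "runtime": 10.02146,
      "iops": 5534.522913826928,
      "mibps": 21.619230132136437,
      "io_size": 4096
    }
  ]
}
""")

job = results["results"][0]
# With 4 KiB I/Os, MiB/s is simply iops / 256.
derived_mibps = job["iops"] * job["io_size"] / 2**20
print(round(derived_mibps, 2))  # prints 21.62, matching the reported mibps
```

The same arithmetic applies to the one-second nvme0n1 runs below; bdevperf derives `mibps` directly from `iops` and `io_size`, so the two fields are always consistent.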
00:21:58.574 5415.00 IOPS, 21.15 MiB/s 00:21:58.574 Latency(us) 00:21:58.574 [2024-11-19T09:49:48.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.574 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:58.574 Verification LBA range: start 0x0 length 0x2000 00:21:58.574 nvme0n1 : 1.01 5475.87 21.39 0.00 0.00 23221.81 5118.05 28711.01 00:21:58.574 [2024-11-19T09:49:48.366Z] =================================================================================================================== 00:21:58.574 [2024-11-19T09:49:48.366Z] Total : 5475.87 21.39 0.00 0.00 23221.81 5118.05 28711.01 00:21:58.574 { 00:21:58.574 "results": [ 00:21:58.574 { 00:21:58.574 "job": "nvme0n1", 00:21:58.574 "core_mask": "0x2", 00:21:58.574 "workload": "verify", 00:21:58.574 "status": "finished", 00:21:58.574 "verify_range": { 00:21:58.574 "start": 0, 00:21:58.574 "length": 8192 00:21:58.574 }, 00:21:58.574 "queue_depth": 128, 00:21:58.574 "io_size": 4096, 00:21:58.574 "runtime": 1.012259, 00:21:58.574 "iops": 5475.871293809193, 00:21:58.574 "mibps": 21.39012224144216, 00:21:58.574 "io_failed": 0, 00:21:58.574 "io_timeout": 0, 00:21:58.574 "avg_latency_us": 23221.810033074747, 00:21:58.574 "min_latency_us": 5118.049523809524, 00:21:58.574 "max_latency_us": 28711.009523809524 00:21:58.574 } 00:21:58.574 ], 00:21:58.574 "core_count": 1 00:21:58.574 } 00:21:58.574 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:58.574 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.574 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.833 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.833 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:58.833 "subsystems": [ 00:21:58.833 { 00:21:58.833 "subsystem": 
"keyring", 00:21:58.833 "config": [ 00:21:58.833 { 00:21:58.834 "method": "keyring_file_add_key", 00:21:58.834 "params": { 00:21:58.834 "name": "key0", 00:21:58.834 "path": "/tmp/tmp.fPResapnIR" 00:21:58.834 } 00:21:58.834 } 00:21:58.834 ] 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "subsystem": "iobuf", 00:21:58.834 "config": [ 00:21:58.834 { 00:21:58.834 "method": "iobuf_set_options", 00:21:58.834 "params": { 00:21:58.834 "small_pool_count": 8192, 00:21:58.834 "large_pool_count": 1024, 00:21:58.834 "small_bufsize": 8192, 00:21:58.834 "large_bufsize": 135168, 00:21:58.834 "enable_numa": false 00:21:58.834 } 00:21:58.834 } 00:21:58.834 ] 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "subsystem": "sock", 00:21:58.834 "config": [ 00:21:58.834 { 00:21:58.834 "method": "sock_set_default_impl", 00:21:58.834 "params": { 00:21:58.834 "impl_name": "posix" 00:21:58.834 } 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "method": "sock_impl_set_options", 00:21:58.834 "params": { 00:21:58.834 "impl_name": "ssl", 00:21:58.834 "recv_buf_size": 4096, 00:21:58.834 "send_buf_size": 4096, 00:21:58.834 "enable_recv_pipe": true, 00:21:58.834 "enable_quickack": false, 00:21:58.834 "enable_placement_id": 0, 00:21:58.834 "enable_zerocopy_send_server": true, 00:21:58.834 "enable_zerocopy_send_client": false, 00:21:58.834 "zerocopy_threshold": 0, 00:21:58.834 "tls_version": 0, 00:21:58.834 "enable_ktls": false 00:21:58.834 } 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "method": "sock_impl_set_options", 00:21:58.834 "params": { 00:21:58.834 "impl_name": "posix", 00:21:58.834 "recv_buf_size": 2097152, 00:21:58.834 "send_buf_size": 2097152, 00:21:58.834 "enable_recv_pipe": true, 00:21:58.834 "enable_quickack": false, 00:21:58.834 "enable_placement_id": 0, 00:21:58.834 "enable_zerocopy_send_server": true, 00:21:58.834 "enable_zerocopy_send_client": false, 00:21:58.834 "zerocopy_threshold": 0, 00:21:58.834 "tls_version": 0, 00:21:58.834 "enable_ktls": false 00:21:58.834 } 00:21:58.834 } 00:21:58.834 
] 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "subsystem": "vmd", 00:21:58.834 "config": [] 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "subsystem": "accel", 00:21:58.834 "config": [ 00:21:58.834 { 00:21:58.834 "method": "accel_set_options", 00:21:58.834 "params": { 00:21:58.834 "small_cache_size": 128, 00:21:58.834 "large_cache_size": 16, 00:21:58.834 "task_count": 2048, 00:21:58.834 "sequence_count": 2048, 00:21:58.834 "buf_count": 2048 00:21:58.834 } 00:21:58.834 } 00:21:58.834 ] 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "subsystem": "bdev", 00:21:58.834 "config": [ 00:21:58.834 { 00:21:58.834 "method": "bdev_set_options", 00:21:58.834 "params": { 00:21:58.834 "bdev_io_pool_size": 65535, 00:21:58.834 "bdev_io_cache_size": 256, 00:21:58.834 "bdev_auto_examine": true, 00:21:58.834 "iobuf_small_cache_size": 128, 00:21:58.834 "iobuf_large_cache_size": 16 00:21:58.834 } 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "method": "bdev_raid_set_options", 00:21:58.834 "params": { 00:21:58.834 "process_window_size_kb": 1024, 00:21:58.834 "process_max_bandwidth_mb_sec": 0 00:21:58.834 } 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "method": "bdev_iscsi_set_options", 00:21:58.834 "params": { 00:21:58.834 "timeout_sec": 30 00:21:58.834 } 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "method": "bdev_nvme_set_options", 00:21:58.834 "params": { 00:21:58.834 "action_on_timeout": "none", 00:21:58.834 "timeout_us": 0, 00:21:58.834 "timeout_admin_us": 0, 00:21:58.834 "keep_alive_timeout_ms": 10000, 00:21:58.834 "arbitration_burst": 0, 00:21:58.834 "low_priority_weight": 0, 00:21:58.834 "medium_priority_weight": 0, 00:21:58.834 "high_priority_weight": 0, 00:21:58.834 "nvme_adminq_poll_period_us": 10000, 00:21:58.834 "nvme_ioq_poll_period_us": 0, 00:21:58.834 "io_queue_requests": 0, 00:21:58.834 "delay_cmd_submit": true, 00:21:58.834 "transport_retry_count": 4, 00:21:58.834 "bdev_retry_count": 3, 00:21:58.834 "transport_ack_timeout": 0, 00:21:58.834 "ctrlr_loss_timeout_sec": 0, 
00:21:58.834 "reconnect_delay_sec": 0, 00:21:58.834 "fast_io_fail_timeout_sec": 0, 00:21:58.834 "disable_auto_failback": false, 00:21:58.834 "generate_uuids": false, 00:21:58.834 "transport_tos": 0, 00:21:58.834 "nvme_error_stat": false, 00:21:58.834 "rdma_srq_size": 0, 00:21:58.834 "io_path_stat": false, 00:21:58.834 "allow_accel_sequence": false, 00:21:58.834 "rdma_max_cq_size": 0, 00:21:58.834 "rdma_cm_event_timeout_ms": 0, 00:21:58.834 "dhchap_digests": [ 00:21:58.834 "sha256", 00:21:58.834 "sha384", 00:21:58.834 "sha512" 00:21:58.834 ], 00:21:58.834 "dhchap_dhgroups": [ 00:21:58.834 "null", 00:21:58.834 "ffdhe2048", 00:21:58.834 "ffdhe3072", 00:21:58.834 "ffdhe4096", 00:21:58.834 "ffdhe6144", 00:21:58.834 "ffdhe8192" 00:21:58.834 ] 00:21:58.834 } 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "method": "bdev_nvme_set_hotplug", 00:21:58.834 "params": { 00:21:58.834 "period_us": 100000, 00:21:58.834 "enable": false 00:21:58.834 } 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "method": "bdev_malloc_create", 00:21:58.834 "params": { 00:21:58.834 "name": "malloc0", 00:21:58.834 "num_blocks": 8192, 00:21:58.834 "block_size": 4096, 00:21:58.834 "physical_block_size": 4096, 00:21:58.834 "uuid": "c68ec500-82ca-4de0-8c57-2afca4ff34cb", 00:21:58.834 "optimal_io_boundary": 0, 00:21:58.834 "md_size": 0, 00:21:58.834 "dif_type": 0, 00:21:58.834 "dif_is_head_of_md": false, 00:21:58.834 "dif_pi_format": 0 00:21:58.834 } 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "method": "bdev_wait_for_examine" 00:21:58.834 } 00:21:58.834 ] 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "subsystem": "nbd", 00:21:58.834 "config": [] 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "subsystem": "scheduler", 00:21:58.834 "config": [ 00:21:58.834 { 00:21:58.834 "method": "framework_set_scheduler", 00:21:58.834 "params": { 00:21:58.834 "name": "static" 00:21:58.834 } 00:21:58.834 } 00:21:58.834 ] 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "subsystem": "nvmf", 00:21:58.834 "config": [ 00:21:58.834 { 
00:21:58.834 "method": "nvmf_set_config", 00:21:58.834 "params": { 00:21:58.834 "discovery_filter": "match_any", 00:21:58.834 "admin_cmd_passthru": { 00:21:58.834 "identify_ctrlr": false 00:21:58.834 }, 00:21:58.834 "dhchap_digests": [ 00:21:58.834 "sha256", 00:21:58.834 "sha384", 00:21:58.834 "sha512" 00:21:58.834 ], 00:21:58.834 "dhchap_dhgroups": [ 00:21:58.834 "null", 00:21:58.834 "ffdhe2048", 00:21:58.834 "ffdhe3072", 00:21:58.834 "ffdhe4096", 00:21:58.834 "ffdhe6144", 00:21:58.834 "ffdhe8192" 00:21:58.834 ] 00:21:58.834 } 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "method": "nvmf_set_max_subsystems", 00:21:58.834 "params": { 00:21:58.834 "max_subsystems": 1024 00:21:58.834 } 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "method": "nvmf_set_crdt", 00:21:58.834 "params": { 00:21:58.834 "crdt1": 0, 00:21:58.834 "crdt2": 0, 00:21:58.834 "crdt3": 0 00:21:58.834 } 00:21:58.834 }, 00:21:58.834 { 00:21:58.834 "method": "nvmf_create_transport", 00:21:58.834 "params": { 00:21:58.834 "trtype": "TCP", 00:21:58.834 "max_queue_depth": 128, 00:21:58.834 "max_io_qpairs_per_ctrlr": 127, 00:21:58.834 "in_capsule_data_size": 4096, 00:21:58.834 "max_io_size": 131072, 00:21:58.834 "io_unit_size": 131072, 00:21:58.834 "max_aq_depth": 128, 00:21:58.834 "num_shared_buffers": 511, 00:21:58.834 "buf_cache_size": 4294967295, 00:21:58.834 "dif_insert_or_strip": false, 00:21:58.834 "zcopy": false, 00:21:58.834 "c2h_success": false, 00:21:58.834 "sock_priority": 0, 00:21:58.835 "abort_timeout_sec": 1, 00:21:58.835 "ack_timeout": 0, 00:21:58.835 "data_wr_pool_size": 0 00:21:58.835 } 00:21:58.835 }, 00:21:58.835 { 00:21:58.835 "method": "nvmf_create_subsystem", 00:21:58.835 "params": { 00:21:58.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.835 "allow_any_host": false, 00:21:58.835 "serial_number": "00000000000000000000", 00:21:58.835 "model_number": "SPDK bdev Controller", 00:21:58.835 "max_namespaces": 32, 00:21:58.835 "min_cntlid": 1, 00:21:58.835 "max_cntlid": 65519, 00:21:58.835 
"ana_reporting": false 00:21:58.835 } 00:21:58.835 }, 00:21:58.835 { 00:21:58.835 "method": "nvmf_subsystem_add_host", 00:21:58.835 "params": { 00:21:58.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.835 "host": "nqn.2016-06.io.spdk:host1", 00:21:58.835 "psk": "key0" 00:21:58.835 } 00:21:58.835 }, 00:21:58.835 { 00:21:58.835 "method": "nvmf_subsystem_add_ns", 00:21:58.835 "params": { 00:21:58.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.835 "namespace": { 00:21:58.835 "nsid": 1, 00:21:58.835 "bdev_name": "malloc0", 00:21:58.835 "nguid": "C68EC50082CA4DE08C572AFCA4FF34CB", 00:21:58.835 "uuid": "c68ec500-82ca-4de0-8c57-2afca4ff34cb", 00:21:58.835 "no_auto_visible": false 00:21:58.835 } 00:21:58.835 } 00:21:58.835 }, 00:21:58.835 { 00:21:58.835 "method": "nvmf_subsystem_add_listener", 00:21:58.835 "params": { 00:21:58.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.835 "listen_address": { 00:21:58.835 "trtype": "TCP", 00:21:58.835 "adrfam": "IPv4", 00:21:58.835 "traddr": "10.0.0.2", 00:21:58.835 "trsvcid": "4420" 00:21:58.835 }, 00:21:58.835 "secure_channel": false, 00:21:58.835 "sock_impl": "ssl" 00:21:58.835 } 00:21:58.835 } 00:21:58.835 ] 00:21:58.835 } 00:21:58.835 ] 00:21:58.835 }' 00:21:58.835 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:59.094 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:59.094 "subsystems": [ 00:21:59.094 { 00:21:59.094 "subsystem": "keyring", 00:21:59.094 "config": [ 00:21:59.094 { 00:21:59.094 "method": "keyring_file_add_key", 00:21:59.094 "params": { 00:21:59.095 "name": "key0", 00:21:59.095 "path": "/tmp/tmp.fPResapnIR" 00:21:59.095 } 00:21:59.095 } 00:21:59.095 ] 00:21:59.095 }, 00:21:59.095 { 00:21:59.095 "subsystem": "iobuf", 00:21:59.095 "config": [ 00:21:59.095 { 00:21:59.095 "method": "iobuf_set_options", 00:21:59.095 "params": { 00:21:59.095 
"small_pool_count": 8192, 00:21:59.095 "large_pool_count": 1024, 00:21:59.095 "small_bufsize": 8192, 00:21:59.095 "large_bufsize": 135168, 00:21:59.095 "enable_numa": false 00:21:59.095 } 00:21:59.095 } 00:21:59.095 ] 00:21:59.095 }, 00:21:59.095 { 00:21:59.095 "subsystem": "sock", 00:21:59.095 "config": [ 00:21:59.095 { 00:21:59.095 "method": "sock_set_default_impl", 00:21:59.095 "params": { 00:21:59.095 "impl_name": "posix" 00:21:59.095 } 00:21:59.095 }, 00:21:59.095 { 00:21:59.095 "method": "sock_impl_set_options", 00:21:59.095 "params": { 00:21:59.095 "impl_name": "ssl", 00:21:59.095 "recv_buf_size": 4096, 00:21:59.095 "send_buf_size": 4096, 00:21:59.095 "enable_recv_pipe": true, 00:21:59.095 "enable_quickack": false, 00:21:59.095 "enable_placement_id": 0, 00:21:59.095 "enable_zerocopy_send_server": true, 00:21:59.095 "enable_zerocopy_send_client": false, 00:21:59.095 "zerocopy_threshold": 0, 00:21:59.095 "tls_version": 0, 00:21:59.095 "enable_ktls": false 00:21:59.095 } 00:21:59.095 }, 00:21:59.095 { 00:21:59.095 "method": "sock_impl_set_options", 00:21:59.095 "params": { 00:21:59.095 "impl_name": "posix", 00:21:59.095 "recv_buf_size": 2097152, 00:21:59.095 "send_buf_size": 2097152, 00:21:59.095 "enable_recv_pipe": true, 00:21:59.095 "enable_quickack": false, 00:21:59.095 "enable_placement_id": 0, 00:21:59.095 "enable_zerocopy_send_server": true, 00:21:59.095 "enable_zerocopy_send_client": false, 00:21:59.095 "zerocopy_threshold": 0, 00:21:59.095 "tls_version": 0, 00:21:59.095 "enable_ktls": false 00:21:59.095 } 00:21:59.095 } 00:21:59.095 ] 00:21:59.095 }, 00:21:59.095 { 00:21:59.095 "subsystem": "vmd", 00:21:59.095 "config": [] 00:21:59.095 }, 00:21:59.095 { 00:21:59.095 "subsystem": "accel", 00:21:59.095 "config": [ 00:21:59.095 { 00:21:59.095 "method": "accel_set_options", 00:21:59.095 "params": { 00:21:59.095 "small_cache_size": 128, 00:21:59.095 "large_cache_size": 16, 00:21:59.095 "task_count": 2048, 00:21:59.095 "sequence_count": 2048, 00:21:59.095 
"buf_count": 2048 00:21:59.095 } 00:21:59.095 } 00:21:59.095 ] 00:21:59.095 }, 00:21:59.095 { 00:21:59.095 "subsystem": "bdev", 00:21:59.095 "config": [ 00:21:59.095 { 00:21:59.095 "method": "bdev_set_options", 00:21:59.095 "params": { 00:21:59.095 "bdev_io_pool_size": 65535, 00:21:59.095 "bdev_io_cache_size": 256, 00:21:59.095 "bdev_auto_examine": true, 00:21:59.095 "iobuf_small_cache_size": 128, 00:21:59.095 "iobuf_large_cache_size": 16 00:21:59.095 } 00:21:59.095 }, 00:21:59.095 { 00:21:59.095 "method": "bdev_raid_set_options", 00:21:59.095 "params": { 00:21:59.095 "process_window_size_kb": 1024, 00:21:59.095 "process_max_bandwidth_mb_sec": 0 00:21:59.095 } 00:21:59.095 }, 00:21:59.095 { 00:21:59.095 "method": "bdev_iscsi_set_options", 00:21:59.095 "params": { 00:21:59.095 "timeout_sec": 30 00:21:59.095 } 00:21:59.095 }, 00:21:59.095 { 00:21:59.095 "method": "bdev_nvme_set_options", 00:21:59.095 "params": { 00:21:59.095 "action_on_timeout": "none", 00:21:59.095 "timeout_us": 0, 00:21:59.095 "timeout_admin_us": 0, 00:21:59.095 "keep_alive_timeout_ms": 10000, 00:21:59.095 "arbitration_burst": 0, 00:21:59.095 "low_priority_weight": 0, 00:21:59.095 "medium_priority_weight": 0, 00:21:59.095 "high_priority_weight": 0, 00:21:59.095 "nvme_adminq_poll_period_us": 10000, 00:21:59.095 "nvme_ioq_poll_period_us": 0, 00:21:59.095 "io_queue_requests": 512, 00:21:59.095 "delay_cmd_submit": true, 00:21:59.095 "transport_retry_count": 4, 00:21:59.095 "bdev_retry_count": 3, 00:21:59.095 "transport_ack_timeout": 0, 00:21:59.095 "ctrlr_loss_timeout_sec": 0, 00:21:59.095 "reconnect_delay_sec": 0, 00:21:59.095 "fast_io_fail_timeout_sec": 0, 00:21:59.095 "disable_auto_failback": false, 00:21:59.095 "generate_uuids": false, 00:21:59.095 "transport_tos": 0, 00:21:59.095 "nvme_error_stat": false, 00:21:59.095 "rdma_srq_size": 0, 00:21:59.095 "io_path_stat": false, 00:21:59.095 "allow_accel_sequence": false, 00:21:59.095 "rdma_max_cq_size": 0, 00:21:59.095 "rdma_cm_event_timeout_ms": 0, 
00:21:59.095 "dhchap_digests": [ 00:21:59.095 "sha256", 00:21:59.095 "sha384", 00:21:59.095 "sha512" 00:21:59.095 ], 00:21:59.095 "dhchap_dhgroups": [ 00:21:59.095 "null", 00:21:59.095 "ffdhe2048", 00:21:59.095 "ffdhe3072", 00:21:59.095 "ffdhe4096", 00:21:59.095 "ffdhe6144", 00:21:59.095 "ffdhe8192" 00:21:59.095 ] 00:21:59.095 } 00:21:59.095 }, 00:21:59.095 { 00:21:59.095 "method": "bdev_nvme_attach_controller", 00:21:59.095 "params": { 00:21:59.095 "name": "nvme0", 00:21:59.095 "trtype": "TCP", 00:21:59.095 "adrfam": "IPv4", 00:21:59.095 "traddr": "10.0.0.2", 00:21:59.095 "trsvcid": "4420", 00:21:59.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.095 "prchk_reftag": false, 00:21:59.095 "prchk_guard": false, 00:21:59.095 "ctrlr_loss_timeout_sec": 0, 00:21:59.095 "reconnect_delay_sec": 0, 00:21:59.095 "fast_io_fail_timeout_sec": 0, 00:21:59.095 "psk": "key0", 00:21:59.095 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.095 "hdgst": false, 00:21:59.095 "ddgst": false, 00:21:59.095 "multipath": "multipath" 00:21:59.095 } 00:21:59.095 }, 00:21:59.095 { 00:21:59.095 "method": "bdev_nvme_set_hotplug", 00:21:59.095 "params": { 00:21:59.095 "period_us": 100000, 00:21:59.095 "enable": false 00:21:59.095 } 00:21:59.095 }, 00:21:59.095 { 00:21:59.095 "method": "bdev_enable_histogram", 00:21:59.095 "params": { 00:21:59.095 "name": "nvme0n1", 00:21:59.095 "enable": true 00:21:59.095 } 00:21:59.095 }, 00:21:59.096 { 00:21:59.096 "method": "bdev_wait_for_examine" 00:21:59.096 } 00:21:59.096 ] 00:21:59.096 }, 00:21:59.096 { 00:21:59.096 "subsystem": "nbd", 00:21:59.096 "config": [] 00:21:59.096 } 00:21:59.096 ] 00:21:59.096 }' 00:21:59.096 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3953919 00:21:59.096 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3953919 ']' 00:21:59.096 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3953919 00:21:59.096 10:49:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:59.096 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.096 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3953919 00:21:59.096 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:59.096 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:59.096 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3953919' 00:21:59.096 killing process with pid 3953919 00:21:59.096 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3953919 00:21:59.096 Received shutdown signal, test time was about 1.000000 seconds 00:21:59.096 00:21:59.096 Latency(us) 00:21:59.096 [2024-11-19T09:49:48.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.096 [2024-11-19T09:49:48.888Z] =================================================================================================================== 00:21:59.096 [2024-11-19T09:49:48.888Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:59.096 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3953919 00:21:59.354 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3953673 00:21:59.354 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3953673 ']' 00:21:59.354 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3953673 00:21:59.354 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:59.354 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.354 
10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3953673 00:21:59.354 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:59.354 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:59.354 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3953673' 00:21:59.354 killing process with pid 3953673 00:21:59.354 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3953673 00:21:59.354 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3953673 00:21:59.612 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:59.613 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:59.613 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.613 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:59.613 "subsystems": [ 00:21:59.613 { 00:21:59.613 "subsystem": "keyring", 00:21:59.613 "config": [ 00:21:59.613 { 00:21:59.613 "method": "keyring_file_add_key", 00:21:59.613 "params": { 00:21:59.613 "name": "key0", 00:21:59.613 "path": "/tmp/tmp.fPResapnIR" 00:21:59.613 } 00:21:59.613 } 00:21:59.613 ] 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "subsystem": "iobuf", 00:21:59.613 "config": [ 00:21:59.613 { 00:21:59.613 "method": "iobuf_set_options", 00:21:59.613 "params": { 00:21:59.613 "small_pool_count": 8192, 00:21:59.613 "large_pool_count": 1024, 00:21:59.613 "small_bufsize": 8192, 00:21:59.613 "large_bufsize": 135168, 00:21:59.613 "enable_numa": false 00:21:59.613 } 00:21:59.613 } 00:21:59.613 ] 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "subsystem": "sock", 00:21:59.613 "config": [ 
00:21:59.613 { 00:21:59.613 "method": "sock_set_default_impl", 00:21:59.613 "params": { 00:21:59.613 "impl_name": "posix" 00:21:59.613 } 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "method": "sock_impl_set_options", 00:21:59.613 "params": { 00:21:59.613 "impl_name": "ssl", 00:21:59.613 "recv_buf_size": 4096, 00:21:59.613 "send_buf_size": 4096, 00:21:59.613 "enable_recv_pipe": true, 00:21:59.613 "enable_quickack": false, 00:21:59.613 "enable_placement_id": 0, 00:21:59.613 "enable_zerocopy_send_server": true, 00:21:59.613 "enable_zerocopy_send_client": false, 00:21:59.613 "zerocopy_threshold": 0, 00:21:59.613 "tls_version": 0, 00:21:59.613 "enable_ktls": false 00:21:59.613 } 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "method": "sock_impl_set_options", 00:21:59.613 "params": { 00:21:59.613 "impl_name": "posix", 00:21:59.613 "recv_buf_size": 2097152, 00:21:59.613 "send_buf_size": 2097152, 00:21:59.613 "enable_recv_pipe": true, 00:21:59.613 "enable_quickack": false, 00:21:59.613 "enable_placement_id": 0, 00:21:59.613 "enable_zerocopy_send_server": true, 00:21:59.613 "enable_zerocopy_send_client": false, 00:21:59.613 "zerocopy_threshold": 0, 00:21:59.613 "tls_version": 0, 00:21:59.613 "enable_ktls": false 00:21:59.613 } 00:21:59.613 } 00:21:59.613 ] 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "subsystem": "vmd", 00:21:59.613 "config": [] 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "subsystem": "accel", 00:21:59.613 "config": [ 00:21:59.613 { 00:21:59.613 "method": "accel_set_options", 00:21:59.613 "params": { 00:21:59.613 "small_cache_size": 128, 00:21:59.613 "large_cache_size": 16, 00:21:59.613 "task_count": 2048, 00:21:59.613 "sequence_count": 2048, 00:21:59.613 "buf_count": 2048 00:21:59.613 } 00:21:59.613 } 00:21:59.613 ] 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "subsystem": "bdev", 00:21:59.613 "config": [ 00:21:59.613 { 00:21:59.613 "method": "bdev_set_options", 00:21:59.613 "params": { 00:21:59.613 "bdev_io_pool_size": 65535, 00:21:59.613 "bdev_io_cache_size": 
256, 00:21:59.613 "bdev_auto_examine": true, 00:21:59.613 "iobuf_small_cache_size": 128, 00:21:59.613 "iobuf_large_cache_size": 16 00:21:59.613 } 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "method": "bdev_raid_set_options", 00:21:59.613 "params": { 00:21:59.613 "process_window_size_kb": 1024, 00:21:59.613 "process_max_bandwidth_mb_sec": 0 00:21:59.613 } 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "method": "bdev_iscsi_set_options", 00:21:59.613 "params": { 00:21:59.613 "timeout_sec": 30 00:21:59.613 } 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "method": "bdev_nvme_set_options", 00:21:59.613 "params": { 00:21:59.613 "action_on_timeout": "none", 00:21:59.613 "timeout_us": 0, 00:21:59.613 "timeout_admin_us": 0, 00:21:59.613 "keep_alive_timeout_ms": 10000, 00:21:59.613 "arbitration_burst": 0, 00:21:59.613 "low_priority_weight": 0, 00:21:59.613 "medium_priority_weight": 0, 00:21:59.613 "high_priority_weight": 0, 00:21:59.613 "nvme_adminq_poll_period_us": 10000, 00:21:59.613 "nvme_ioq_poll_period_us": 0, 00:21:59.613 "io_queue_requests": 0, 00:21:59.613 "delay_cmd_submit": true, 00:21:59.613 "transport_retry_count": 4, 00:21:59.613 "bdev_retry_count": 3, 00:21:59.613 "transport_ack_timeout": 0, 00:21:59.613 "ctrlr_loss_timeout_sec": 0, 00:21:59.613 "reconnect_delay_sec": 0, 00:21:59.613 "fast_io_fail_timeout_sec": 0, 00:21:59.613 "disable_auto_failback": false, 00:21:59.613 "generate_uuids": false, 00:21:59.613 "transport_tos": 0, 00:21:59.613 "nvme_error_stat": false, 00:21:59.613 "rdma_srq_size": 0, 00:21:59.613 "io_path_stat": false, 00:21:59.613 "allow_accel_sequence": false, 00:21:59.613 "rdma_max_cq_size": 0, 00:21:59.613 "rdma_cm_event_timeout_ms": 0, 00:21:59.613 "dhchap_digests": [ 00:21:59.613 "sha256", 00:21:59.613 "sha384", 00:21:59.613 "sha512" 00:21:59.613 ], 00:21:59.613 "dhchap_dhgroups": [ 00:21:59.613 "null", 00:21:59.613 "ffdhe2048", 00:21:59.613 "ffdhe3072", 00:21:59.613 "ffdhe4096", 00:21:59.613 "ffdhe6144", 00:21:59.613 "ffdhe8192" 00:21:59.613 ] 
00:21:59.613 } 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "method": "bdev_nvme_set_hotplug", 00:21:59.613 "params": { 00:21:59.613 "period_us": 100000, 00:21:59.613 "enable": false 00:21:59.613 } 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "method": "bdev_malloc_create", 00:21:59.613 "params": { 00:21:59.613 "name": "malloc0", 00:21:59.613 "num_blocks": 8192, 00:21:59.613 "block_size": 4096, 00:21:59.613 "physical_block_size": 4096, 00:21:59.613 "uuid": "c68ec500-82ca-4de0-8c57-2afca4ff34cb", 00:21:59.613 "optimal_io_boundary": 0, 00:21:59.613 "md_size": 0, 00:21:59.613 "dif_type": 0, 00:21:59.613 "dif_is_head_of_md": false, 00:21:59.613 "dif_pi_format": 0 00:21:59.613 } 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "method": "bdev_wait_for_examine" 00:21:59.613 } 00:21:59.613 ] 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "subsystem": "nbd", 00:21:59.613 "config": [] 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "subsystem": "scheduler", 00:21:59.613 "config": [ 00:21:59.613 { 00:21:59.613 "method": "framework_set_scheduler", 00:21:59.613 "params": { 00:21:59.613 "name": "static" 00:21:59.613 } 00:21:59.613 } 00:21:59.613 ] 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "subsystem": "nvmf", 00:21:59.613 "config": [ 00:21:59.613 { 00:21:59.613 "method": "nvmf_set_config", 00:21:59.613 "params": { 00:21:59.613 "discovery_filter": "match_any", 00:21:59.613 "admin_cmd_passthru": { 00:21:59.613 "identify_ctrlr": false 00:21:59.613 }, 00:21:59.613 "dhchap_digests": [ 00:21:59.613 "sha256", 00:21:59.613 "sha384", 00:21:59.613 "sha512" 00:21:59.613 ], 00:21:59.613 "dhchap_dhgroups": [ 00:21:59.613 "null", 00:21:59.613 "ffdhe2048", 00:21:59.613 "ffdhe3072", 00:21:59.613 "ffdhe4096", 00:21:59.613 "ffdhe6144", 00:21:59.613 "ffdhe8192" 00:21:59.613 ] 00:21:59.613 } 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "method": "nvmf_set_max_subsystems", 00:21:59.613 "params": { 00:21:59.613 "max_subsystems": 1024 00:21:59.613 } 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "method": 
"nvmf_set_crdt", 00:21:59.613 "params": { 00:21:59.613 "crdt1": 0, 00:21:59.613 "crdt2": 0, 00:21:59.613 "crdt3": 0 00:21:59.613 } 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "method": "nvmf_create_transport", 00:21:59.613 "params": { 00:21:59.613 "trtype": "TCP", 00:21:59.613 "max_queue_depth": 128, 00:21:59.613 "max_io_qpairs_per_ctrlr": 127, 00:21:59.613 "in_capsule_data_size": 4096, 00:21:59.613 "max_io_size": 131072, 00:21:59.613 "io_unit_size": 131072, 00:21:59.613 "max_aq_depth": 128, 00:21:59.613 "num_shared_buffers": 511, 00:21:59.613 "buf_cache_size": 4294967295, 00:21:59.613 "dif_insert_or_strip": false, 00:21:59.613 "zcopy": false, 00:21:59.613 "c2h_success": false, 00:21:59.613 "sock_priority": 0, 00:21:59.613 "abort_timeout_sec": 1, 00:21:59.613 "ack_timeout": 0, 00:21:59.613 "data_wr_pool_size": 0 00:21:59.613 } 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "method": "nvmf_create_subsystem", 00:21:59.613 "params": { 00:21:59.613 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.613 "allow_any_host": false, 00:21:59.613 "serial_number": "00000000000000000000", 00:21:59.613 "model_number": "SPDK bdev Controller", 00:21:59.613 "max_namespaces": 32, 00:21:59.613 "min_cntlid": 1, 00:21:59.613 "max_cntlid": 65519, 00:21:59.613 "ana_reporting": false 00:21:59.613 } 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "method": "nvmf_subsystem_add_host", 00:21:59.613 "params": { 00:21:59.613 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.613 "host": "nqn.2016-06.io.spdk:host1", 00:21:59.613 "psk": "key0" 00:21:59.613 } 00:21:59.613 }, 00:21:59.613 { 00:21:59.613 "method": "nvmf_subsystem_add_ns", 00:21:59.613 "params": { 00:21:59.613 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.613 "namespace": { 00:21:59.613 "nsid": 1, 00:21:59.613 "bdev_name": "malloc0", 00:21:59.613 "nguid": "C68EC50082CA4DE08C572AFCA4FF34CB", 00:21:59.613 "uuid": "c68ec500-82ca-4de0-8c57-2afca4ff34cb", 00:21:59.613 "no_auto_visible": false 00:21:59.613 } 00:21:59.613 } 00:21:59.613 }, 00:21:59.613 { 
00:21:59.613 "method": "nvmf_subsystem_add_listener", 00:21:59.613 "params": { 00:21:59.613 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.613 "listen_address": { 00:21:59.613 "trtype": "TCP", 00:21:59.613 "adrfam": "IPv4", 00:21:59.613 "traddr": "10.0.0.2", 00:21:59.613 "trsvcid": "4420" 00:21:59.613 }, 00:21:59.613 "secure_channel": false, 00:21:59.613 "sock_impl": "ssl" 00:21:59.613 } 00:21:59.613 } 00:21:59.613 ] 00:21:59.613 } 00:21:59.613 ] 00:21:59.613 }' 00:21:59.613 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.613 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3954302 00:21:59.613 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:59.613 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3954302 00:21:59.613 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3954302 ']' 00:21:59.613 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.613 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.613 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.613 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.613 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.613 [2024-11-19 10:49:49.205671] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:21:59.613 [2024-11-19 10:49:49.205718] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.613 [2024-11-19 10:49:49.282436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.613 [2024-11-19 10:49:49.322731] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.613 [2024-11-19 10:49:49.322766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.613 [2024-11-19 10:49:49.322773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.613 [2024-11-19 10:49:49.322779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.613 [2024-11-19 10:49:49.322783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:59.613 [2024-11-19 10:49:49.323404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.872 [2024-11-19 10:49:49.535976] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.872 [2024-11-19 10:49:49.568011] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:59.872 [2024-11-19 10:49:49.568210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.438 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.438 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:00.438 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:00.438 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:00.438 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.438 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.438 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3954423 00:22:00.438 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3954423 /var/tmp/bdevperf.sock 00:22:00.438 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3954423 ']' 00:22:00.438 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.438 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:00.439 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:22:00.439 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:00.439 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:22:00.439 "subsystems": [ 00:22:00.439 { 00:22:00.439 "subsystem": "keyring", 00:22:00.439 "config": [ 00:22:00.439 { 00:22:00.439 "method": "keyring_file_add_key", 00:22:00.439 "params": { 00:22:00.439 "name": "key0", 00:22:00.439 "path": "/tmp/tmp.fPResapnIR" 00:22:00.439 } 00:22:00.439 } 00:22:00.439 ] 00:22:00.439 }, 00:22:00.439 { 00:22:00.439 "subsystem": "iobuf", 00:22:00.439 "config": [ 00:22:00.439 { 00:22:00.439 "method": "iobuf_set_options", 00:22:00.439 "params": { 00:22:00.439 "small_pool_count": 8192, 00:22:00.439 "large_pool_count": 1024, 00:22:00.439 "small_bufsize": 8192, 00:22:00.439 "large_bufsize": 135168, 00:22:00.439 "enable_numa": false 00:22:00.439 } 00:22:00.439 } 00:22:00.439 ] 00:22:00.439 }, 00:22:00.439 { 00:22:00.439 "subsystem": "sock", 00:22:00.439 "config": [ 00:22:00.439 { 00:22:00.439 "method": "sock_set_default_impl", 00:22:00.439 "params": { 00:22:00.439 "impl_name": "posix" 00:22:00.439 } 00:22:00.439 }, 00:22:00.439 { 00:22:00.439 "method": "sock_impl_set_options", 00:22:00.439 "params": { 00:22:00.439 "impl_name": "ssl", 00:22:00.439 "recv_buf_size": 4096, 00:22:00.439 "send_buf_size": 4096, 00:22:00.439 "enable_recv_pipe": true, 00:22:00.439 "enable_quickack": false, 00:22:00.439 "enable_placement_id": 0, 00:22:00.439 "enable_zerocopy_send_server": true, 00:22:00.439 "enable_zerocopy_send_client": false, 00:22:00.439 "zerocopy_threshold": 0, 00:22:00.439 "tls_version": 0, 00:22:00.439 "enable_ktls": false 00:22:00.439 } 00:22:00.439 }, 00:22:00.439 { 00:22:00.439 "method": "sock_impl_set_options", 00:22:00.439 "params": { 
00:22:00.439 "impl_name": "posix", 00:22:00.439 "recv_buf_size": 2097152, 00:22:00.439 "send_buf_size": 2097152, 00:22:00.439 "enable_recv_pipe": true, 00:22:00.439 "enable_quickack": false, 00:22:00.439 "enable_placement_id": 0, 00:22:00.439 "enable_zerocopy_send_server": true, 00:22:00.439 "enable_zerocopy_send_client": false, 00:22:00.439 "zerocopy_threshold": 0, 00:22:00.439 "tls_version": 0, 00:22:00.439 "enable_ktls": false 00:22:00.439 } 00:22:00.439 } 00:22:00.439 ] 00:22:00.439 }, 00:22:00.439 { 00:22:00.439 "subsystem": "vmd", 00:22:00.439 "config": [] 00:22:00.439 }, 00:22:00.439 { 00:22:00.439 "subsystem": "accel", 00:22:00.439 "config": [ 00:22:00.439 { 00:22:00.439 "method": "accel_set_options", 00:22:00.439 "params": { 00:22:00.439 "small_cache_size": 128, 00:22:00.439 "large_cache_size": 16, 00:22:00.439 "task_count": 2048, 00:22:00.439 "sequence_count": 2048, 00:22:00.439 "buf_count": 2048 00:22:00.439 } 00:22:00.439 } 00:22:00.439 ] 00:22:00.439 }, 00:22:00.439 { 00:22:00.439 "subsystem": "bdev", 00:22:00.439 "config": [ 00:22:00.439 { 00:22:00.439 "method": "bdev_set_options", 00:22:00.439 "params": { 00:22:00.439 "bdev_io_pool_size": 65535, 00:22:00.439 "bdev_io_cache_size": 256, 00:22:00.439 "bdev_auto_examine": true, 00:22:00.439 "iobuf_small_cache_size": 128, 00:22:00.439 "iobuf_large_cache_size": 16 00:22:00.439 } 00:22:00.439 }, 00:22:00.439 { 00:22:00.439 "method": "bdev_raid_set_options", 00:22:00.439 "params": { 00:22:00.439 "process_window_size_kb": 1024, 00:22:00.439 "process_max_bandwidth_mb_sec": 0 00:22:00.439 } 00:22:00.439 }, 00:22:00.439 { 00:22:00.439 "method": "bdev_iscsi_set_options", 00:22:00.439 "params": { 00:22:00.439 "timeout_sec": 30 00:22:00.439 } 00:22:00.439 }, 00:22:00.439 { 00:22:00.439 "method": "bdev_nvme_set_options", 00:22:00.439 "params": { 00:22:00.439 "action_on_timeout": "none", 00:22:00.439 "timeout_us": 0, 00:22:00.439 "timeout_admin_us": 0, 00:22:00.439 "keep_alive_timeout_ms": 10000, 00:22:00.439 
"arbitration_burst": 0, 00:22:00.439 "low_priority_weight": 0, 00:22:00.439 "medium_priority_weight": 0, 00:22:00.439 "high_priority_weight": 0, 00:22:00.439 "nvme_adminq_poll_period_us": 10000, 00:22:00.439 "nvme_ioq_poll_period_us": 0, 00:22:00.439 "io_queue_requests": 512, 00:22:00.439 "delay_cmd_submit": true, 00:22:00.439 "transport_retry_count": 4, 00:22:00.439 "bdev_retry_count": 3, 00:22:00.439 "transport_ack_timeout": 0, 00:22:00.439 "ctrlr_loss_timeout_sec": 0, 00:22:00.439 "reconnect_delay_sec": 0, 00:22:00.439 "fast_io_fail_timeout_sec": 0, 00:22:00.439 "disable_auto_failback": false, 00:22:00.439 "generate_uuids": false, 00:22:00.439 "transport_tos": 0, 00:22:00.439 "nvme_error_stat": false, 00:22:00.439 "rdma_srq_size": 0, 00:22:00.439 "io_path_stat": false, 00:22:00.439 "allow_accel_sequence": false, 00:22:00.439 "rdma_max_cq_size": 0, 00:22:00.439 "rdma_cm_event_timeout_ms": 0, 00:22:00.439 "dhchap_digests": [ 00:22:00.439 "sha256", 00:22:00.439 "sha384", 00:22:00.439 "sha512" 00:22:00.439 ], 00:22:00.439 "dhchap_dhgroups": [ 00:22:00.439 "null", 00:22:00.439 "ffdhe2048", 00:22:00.439 "ffdhe3072", 00:22:00.439 "ffdhe4096", 00:22:00.439 "ffdhe6144", 00:22:00.439 "ffdhe8192" 00:22:00.439 ] 00:22:00.439 } 00:22:00.439 }, 00:22:00.439 { 00:22:00.439 "method": "bdev_nvme_attach_controller", 00:22:00.439 "params": { 00:22:00.439 "name": "nvme0", 00:22:00.439 "trtype": "TCP", 00:22:00.439 "adrfam": "IPv4", 00:22:00.439 "traddr": "10.0.0.2", 00:22:00.439 "trsvcid": "4420", 00:22:00.439 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.439 "prchk_reftag": false, 00:22:00.439 "prchk_guard": false, 00:22:00.439 "ctrlr_loss_timeout_sec": 0, 00:22:00.439 "reconnect_delay_sec": 0, 00:22:00.439 "fast_io_fail_timeout_sec": 0, 00:22:00.440 "psk": "key0", 00:22:00.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:00.440 "hdgst": false, 00:22:00.440 "ddgst": false, 00:22:00.440 "multipath": "multipath" 00:22:00.440 } 00:22:00.440 }, 00:22:00.440 { 00:22:00.440 
"method": "bdev_nvme_set_hotplug", 00:22:00.440 "params": { 00:22:00.440 "period_us": 100000, 00:22:00.440 "enable": false 00:22:00.440 } 00:22:00.440 }, 00:22:00.440 { 00:22:00.440 "method": "bdev_enable_histogram", 00:22:00.440 "params": { 00:22:00.440 "name": "nvme0n1", 00:22:00.440 "enable": true 00:22:00.440 } 00:22:00.440 }, 00:22:00.440 { 00:22:00.440 "method": "bdev_wait_for_examine" 00:22:00.440 } 00:22:00.440 ] 00:22:00.440 }, 00:22:00.440 { 00:22:00.440 "subsystem": "nbd", 00:22:00.440 "config": [] 00:22:00.440 } 00:22:00.440 ] 00:22:00.440 }' 00:22:00.440 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.440 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.440 [2024-11-19 10:49:50.110885] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:22:00.440 [2024-11-19 10:49:50.110936] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954423 ] 00:22:00.440 [2024-11-19 10:49:50.184344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.440 [2024-11-19 10:49:50.224829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.699 [2024-11-19 10:49:50.376946] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:01.265 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.265 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:01.265 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:01.265 10:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:22:01.524 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.524 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:01.524 Running I/O for 1 seconds... 00:22:02.718 5449.00 IOPS, 21.29 MiB/s 00:22:02.718 Latency(us) 00:22:02.718 [2024-11-19T09:49:52.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.718 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:02.718 Verification LBA range: start 0x0 length 0x2000 00:22:02.718 nvme0n1 : 1.02 5475.57 21.39 0.00 0.00 23174.29 5118.05 21470.84 00:22:02.718 [2024-11-19T09:49:52.510Z] =================================================================================================================== 00:22:02.718 [2024-11-19T09:49:52.510Z] Total : 5475.57 21.39 0.00 0.00 23174.29 5118.05 21470.84 00:22:02.718 { 00:22:02.718 "results": [ 00:22:02.718 { 00:22:02.718 "job": "nvme0n1", 00:22:02.718 "core_mask": "0x2", 00:22:02.718 "workload": "verify", 00:22:02.718 "status": "finished", 00:22:02.718 "verify_range": { 00:22:02.718 "start": 0, 00:22:02.718 "length": 8192 00:22:02.718 }, 00:22:02.718 "queue_depth": 128, 00:22:02.718 "io_size": 4096, 00:22:02.718 "runtime": 1.018525, 00:22:02.718 "iops": 5475.565155494465, 00:22:02.718 "mibps": 21.388926388650255, 00:22:02.718 "io_failed": 0, 00:22:02.718 "io_timeout": 0, 00:22:02.718 "avg_latency_us": 23174.287155579463, 00:22:02.718 "min_latency_us": 5118.049523809524, 00:22:02.718 "max_latency_us": 21470.841904761906 00:22:02.718 } 00:22:02.718 ], 00:22:02.718 "core_count": 1 00:22:02.718 } 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:22:02.718 10:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:02.718 nvmf_trace.0 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3954423 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3954423 ']' 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3954423 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 3954423 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3954423' 00:22:02.718 killing process with pid 3954423 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3954423 00:22:02.718 Received shutdown signal, test time was about 1.000000 seconds 00:22:02.718 00:22:02.718 Latency(us) 00:22:02.718 [2024-11-19T09:49:52.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.718 [2024-11-19T09:49:52.510Z] =================================================================================================================== 00:22:02.718 [2024-11-19T09:49:52.510Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.718 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3954423 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:02.977 rmmod nvme_tcp 00:22:02.977 rmmod nvme_fabrics 00:22:02.977 rmmod nvme_keyring 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3954302 ']' 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3954302 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3954302 ']' 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3954302 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3954302 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3954302' 00:22:02.977 killing process with pid 3954302 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3954302 00:22:02.977 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3954302 00:22:03.236 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:03.237 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:03.237 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:03.237 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:22:03.237 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:22:03.237 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:03.237 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:22:03.237 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:03.237 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:03.237 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.237 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.237 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.143 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:05.403 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.aQx3s3O2d2 /tmp/tmp.iQTzkTNb4z /tmp/tmp.fPResapnIR 00:22:05.403 00:22:05.403 real 1m21.329s 00:22:05.403 user 2m4.171s 00:22:05.403 sys 0m30.371s 00:22:05.403 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:05.403 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.403 ************************************ 00:22:05.403 END TEST nvmf_tls 00:22:05.403 ************************************ 00:22:05.403 10:49:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:05.403 10:49:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:05.403 10:49:54 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:22:05.403 10:49:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:05.403 ************************************ 00:22:05.403 START TEST nvmf_fips 00:22:05.403 ************************************ 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:05.403 * Looking for test storage... 00:22:05.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:22:05.403 
10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:22:05.403 10:49:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:05.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.403 --rc genhtml_branch_coverage=1 00:22:05.403 --rc genhtml_function_coverage=1 00:22:05.403 --rc genhtml_legend=1 00:22:05.403 --rc geninfo_all_blocks=1 00:22:05.403 --rc geninfo_unexecuted_blocks=1 00:22:05.403 00:22:05.403 ' 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:05.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.403 --rc genhtml_branch_coverage=1 00:22:05.403 --rc genhtml_function_coverage=1 00:22:05.403 --rc genhtml_legend=1 00:22:05.403 --rc geninfo_all_blocks=1 00:22:05.403 --rc geninfo_unexecuted_blocks=1 00:22:05.403 00:22:05.403 ' 00:22:05.403 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:05.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.404 --rc genhtml_branch_coverage=1 00:22:05.404 --rc genhtml_function_coverage=1 00:22:05.404 --rc genhtml_legend=1 00:22:05.404 --rc geninfo_all_blocks=1 00:22:05.404 --rc geninfo_unexecuted_blocks=1 00:22:05.404 00:22:05.404 ' 00:22:05.404 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:05.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.404 --rc genhtml_branch_coverage=1 00:22:05.404 --rc genhtml_function_coverage=1 00:22:05.404 --rc genhtml_legend=1 00:22:05.404 --rc geninfo_all_blocks=1 00:22:05.404 --rc geninfo_unexecuted_blocks=1 00:22:05.404 00:22:05.404 ' 00:22:05.404 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:22:05.404 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.664 10:49:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.664 10:49:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:05.664 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:05.664 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:22:05.665 Error setting digest 00:22:05.665 40E2E522CB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:05.665 40E2E522CB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:05.665 10:49:55 
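The FIPS gate above boils down to two checks: `openssl list -providers | grep name` must report exactly two providers (a base provider and a FIPS provider), and `openssl md5` must fail because MD5 is not a FIPS-approved digest. A minimal sketch of the provider check, written as a standalone function (the function name and the fixed two-provider expectation mirror `fips/fips.sh@121`; the sample listing strings are taken from this run's output):

```shell
#!/usr/bin/env bash
# Sketch of the provider check from fips/fips.sh: given the output of
# `openssl list -providers`, require exactly two "name:" lines, the first
# matching *base* and the second matching *fips*.
check_fips_providers() {
    local listing=$1
    local names
    mapfile -t names < <(grep name <<< "$listing")
    (( ${#names[@]} == 2 )) || return 1
    [[ ${names[0]} == *base* ]] || return 1
    [[ ${names[1]} == *fips* ]] || return 1
}
```

On a correctly configured host, `check_fips_providers "$(openssl list -providers | grep name)"` succeeds; the MD5 probe is the complementary negative test, since a FIPS-only libcrypto refuses to fetch the digest at all.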
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:22:05.665 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.239 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:12.239 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:12.240 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:12.240 Found net devices under 0000:86:00.0: cvl_0_0 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:12.240 Found net devices under 0000:86:00.1: cvl_0_1 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.240 10:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:12.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:22:12.240 00:22:12.240 --- 10.0.0.2 ping statistics --- 00:22:12.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.240 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:12.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:22:12.240 00:22:12.240 --- 10.0.0.1 ping statistics --- 00:22:12.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.240 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:12.240 10:50:01 
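The `nvmf_tcp_init` sequence above builds a two-endpoint topology out of one physical NIC pair: the target port is moved into a fresh network namespace and addressed 10.0.0.2, the initiator port stays in the root namespace as 10.0.0.1, and both directions are ping-verified. A dry-run sketch of that plumbing (interface names `cvl_0_0`/`cvl_0_1` are the ones this run discovered; the `RUN=echo` wrapper is an addition of this sketch so the commands print instead of executing, since applying them needs root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology from nvmf/common.sh.
# RUN=echo (the default here) prints each command; clear RUN and run as
# root to apply them for real.
RUN=${RUN:-echo}
NS=cvl_0_0_ns_spdk

$RUN ip netns add "$NS"                                        # target-side namespace
$RUN ip link set cvl_0_0 netns "$NS"                           # move target port into it
$RUN ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP, root namespace
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside namespace
$RUN ip link set cvl_0_1 up
$RUN ip netns exec "$NS" ip link set cvl_0_0 up
$RUN ping -c 1 10.0.0.2                                        # initiator -> target check
```

The namespace boundary is what lets a single host act as both NVMe/TCP target and initiator over real NIC hardware, which is why later target-side commands in the log are prefixed with `ip netns exec cvl_0_0_ns_spdk`.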
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3958444 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3958444 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3958444 ']' 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.240 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:12.240 [2024-11-19 10:50:01.400969] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:22:12.240 [2024-11-19 10:50:01.401016] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.240 [2024-11-19 10:50:01.481426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.240 [2024-11-19 10:50:01.522836] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.240 [2024-11-19 10:50:01.522872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.240 [2024-11-19 10:50:01.522879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.240 [2024-11-19 10:50:01.522885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.240 [2024-11-19 10:50:01.522890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:12.240 [2024-11-19 10:50:01.523478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.500 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.500 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:12.500 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:12.500 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.500 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:12.500 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.500 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:12.500 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:12.500 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:12.500 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.tmU 00:22:12.500 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:12.500 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.tmU 00:22:12.500 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.tmU 00:22:12.500 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.tmU 00:22:12.500 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:12.758 [2024-11-19 10:50:02.452409] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.758 [2024-11-19 10:50:02.468424] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:12.758 [2024-11-19 10:50:02.468580] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.758 malloc0 00:22:12.759 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:12.759 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3958697 00:22:12.759 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:12.759 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3958697 /var/tmp/bdevperf.sock 00:22:12.759 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3958697 ']' 00:22:12.759 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.759 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.759 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.759 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.759 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:13.018 [2024-11-19 10:50:02.597781] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:22:13.018 [2024-11-19 10:50:02.597836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3958697 ] 00:22:13.018 [2024-11-19 10:50:02.673910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.018 [2024-11-19 10:50:02.714585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.955 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.955 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:13.955 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.tmU 00:22:13.955 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:14.214 [2024-11-19 10:50:03.806200] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.214 TLSTESTn1 00:22:14.214 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:14.214 Running I/O for 10 seconds... 
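The TLS handshake under test is keyed by a pre-shared key file: `fips.sh@137-142` writes the interchange-format PSK to a private temp file, registers it with bdevperf's keyring, then attaches the controller with `--psk`. A standalone sketch of that flow (the key value, 0600 mode, and rpc.py arguments are taken verbatim from the run above; the rpc.py calls are left as comments because they need a running bdevperf instance):

```shell
#!/usr/bin/env bash
# Sketch of the PSK setup from fips/fips.sh: write the TLS key to a
# private temp file before handing it to SPDK's keyring.
key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
key_path=$(mktemp -t spdk-psk.XXX)
echo -n "$key" > "$key_path"       # -n: the key file must not end in a newline
chmod 0600 "$key_path"             # harness tightens perms before registering the key
# scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
# scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
#     -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
#     -q nqn.2016-06.io.spdk:host1 --psk key0
```

With the FIPS-only OpenSSL config exported earlier (`OPENSSL_CONF=spdk_fips.conf`), a successful attach and the TLSTESTn1 verify workload that follows demonstrate the TLS path works with only FIPS-approved ciphers available.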
00:22:16.542 5388.00 IOPS, 21.05 MiB/s [2024-11-19T09:50:07.270Z] 5500.50 IOPS, 21.49 MiB/s [2024-11-19T09:50:08.204Z] 5548.33 IOPS, 21.67 MiB/s [2024-11-19T09:50:09.139Z] 5474.25 IOPS, 21.38 MiB/s [2024-11-19T09:50:10.078Z] 5498.60 IOPS, 21.48 MiB/s [2024-11-19T09:50:11.015Z] 5517.00 IOPS, 21.55 MiB/s [2024-11-19T09:50:12.390Z] 5524.71 IOPS, 21.58 MiB/s [2024-11-19T09:50:13.325Z] 5513.12 IOPS, 21.54 MiB/s [2024-11-19T09:50:14.262Z] 5517.67 IOPS, 21.55 MiB/s [2024-11-19T09:50:14.262Z] 5508.80 IOPS, 21.52 MiB/s 00:22:24.470 Latency(us) 00:22:24.470 [2024-11-19T09:50:14.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.470 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:24.470 Verification LBA range: start 0x0 length 0x2000 00:22:24.470 TLSTESTn1 : 10.03 5502.56 21.49 0.00 0.00 23208.70 6179.11 33454.57 00:22:24.470 [2024-11-19T09:50:14.262Z] =================================================================================================================== 00:22:24.470 [2024-11-19T09:50:14.263Z] Total : 5502.56 21.49 0.00 0.00 23208.70 6179.11 33454.57 00:22:24.471 { 00:22:24.471 "results": [ 00:22:24.471 { 00:22:24.471 "job": "TLSTESTn1", 00:22:24.471 "core_mask": "0x4", 00:22:24.471 "workload": "verify", 00:22:24.471 "status": "finished", 00:22:24.471 "verify_range": { 00:22:24.471 "start": 0, 00:22:24.471 "length": 8192 00:22:24.471 }, 00:22:24.471 "queue_depth": 128, 00:22:24.471 "io_size": 4096, 00:22:24.471 "runtime": 10.034598, 00:22:24.471 "iops": 5502.562235178729, 00:22:24.471 "mibps": 21.49438373116691, 00:22:24.471 "io_failed": 0, 00:22:24.471 "io_timeout": 0, 00:22:24.471 "avg_latency_us": 23208.704038856922, 00:22:24.471 "min_latency_us": 6179.108571428572, 00:22:24.471 "max_latency_us": 33454.56761904762 00:22:24.471 } 00:22:24.471 ], 00:22:24.471 "core_count": 1 00:22:24.471 } 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:24.471 
10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:24.471 nvmf_trace.0 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3958697 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3958697 ']' 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3958697 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3958697 00:22:24.471 10:50:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3958697' 00:22:24.471 killing process with pid 3958697 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3958697 00:22:24.471 Received shutdown signal, test time was about 10.000000 seconds 00:22:24.471 00:22:24.471 Latency(us) 00:22:24.471 [2024-11-19T09:50:14.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.471 [2024-11-19T09:50:14.263Z] =================================================================================================================== 00:22:24.471 [2024-11-19T09:50:14.263Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:24.471 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3958697 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:24.730 rmmod nvme_tcp 00:22:24.730 rmmod nvme_fabrics 00:22:24.730 rmmod nvme_keyring 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3958444 ']' 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3958444 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3958444 ']' 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3958444 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3958444 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3958444' 00:22:24.730 killing process with pid 3958444 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3958444 00:22:24.730 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3958444 00:22:24.990 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:24.990 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:24.990 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:24.990 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:22:24.990 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:22:24.990 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:24.990 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:22:24.990 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:24.990 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:24.990 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.990 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.990 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.557 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:27.557 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.tmU 00:22:27.557 00:22:27.557 real 0m21.708s 00:22:27.557 user 0m23.590s 00:22:27.557 sys 0m9.607s 00:22:27.557 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:27.557 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:27.557 ************************************ 00:22:27.557 END TEST nvmf_fips 00:22:27.557 ************************************ 00:22:27.557 10:50:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:27.557 10:50:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:27.557 10:50:16 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:27.558 ************************************ 00:22:27.558 START TEST nvmf_control_msg_list 00:22:27.558 ************************************ 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:27.558 * Looking for test storage... 00:22:27.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:27.558 10:50:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:27.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.558 --rc genhtml_branch_coverage=1 00:22:27.558 --rc genhtml_function_coverage=1 00:22:27.558 --rc genhtml_legend=1 00:22:27.558 --rc geninfo_all_blocks=1 00:22:27.558 --rc geninfo_unexecuted_blocks=1 00:22:27.558 00:22:27.558 ' 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:27.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.558 --rc genhtml_branch_coverage=1 00:22:27.558 --rc genhtml_function_coverage=1 00:22:27.558 --rc genhtml_legend=1 00:22:27.558 --rc geninfo_all_blocks=1 00:22:27.558 --rc geninfo_unexecuted_blocks=1 00:22:27.558 00:22:27.558 ' 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:27.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.558 --rc genhtml_branch_coverage=1 00:22:27.558 --rc genhtml_function_coverage=1 00:22:27.558 --rc genhtml_legend=1 00:22:27.558 --rc geninfo_all_blocks=1 00:22:27.558 --rc geninfo_unexecuted_blocks=1 00:22:27.558 00:22:27.558 ' 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:22:27.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.558 --rc genhtml_branch_coverage=1 00:22:27.558 --rc genhtml_function_coverage=1 00:22:27.558 --rc genhtml_legend=1 00:22:27.558 --rc geninfo_all_blocks=1 00:22:27.558 --rc geninfo_unexecuted_blocks=1 00:22:27.558 00:22:27.558 ' 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.558 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.559 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.559 10:50:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:27.559 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.559 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:27.559 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:27.559 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:27.559 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.559 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.559 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.559 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:27.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:27.559 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:27.559 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:27.559 10:50:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:27.559 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:27.559 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:27.559 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.559 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:27.559 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:27.559 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:27.559 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.559 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.559 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.559 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:27.559 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:27.559 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:22:27.559 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:22:34.189 10:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:34.189 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:34.189 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:34.189 10:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:34.189 Found net devices under 0000:86:00.0: cvl_0_0 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.189 10:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:34.189 Found net devices under 0000:86:00.1: cvl_0_1 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:34.189 10:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:34.189 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:34.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:34.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:22:34.190 00:22:34.190 --- 10.0.0.2 ping statistics --- 00:22:34.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.190 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:22:34.190 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:34.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:34.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:22:34.190 00:22:34.190 --- 10.0.0.1 ping statistics --- 00:22:34.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.190 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:22:34.190 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.190 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:22:34.190 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:34.190 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.190 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:34.190 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:34.190 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:22:34.190 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:34.190 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3964079 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3964079 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3964079 ']' 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:34.190 [2024-11-19 10:50:23.066743] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:22:34.190 [2024-11-19 10:50:23.066795] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.190 [2024-11-19 10:50:23.144516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.190 [2024-11-19 10:50:23.185585] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.190 [2024-11-19 10:50:23.185622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.190 [2024-11-19 10:50:23.185630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.190 [2024-11-19 10:50:23.185636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.190 [2024-11-19 10:50:23.185641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:34.190 [2024-11-19 10:50:23.186192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:34.190 [2024-11-19 10:50:23.935934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:34.190 Malloc0 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.190 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:34.449 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.449 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:34.449 [2024-11-19 10:50:23.980344] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.449 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.449 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3964320 00:22:34.449 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:34.449 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3964321 00:22:34.449 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:34.449 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3964322 00:22:34.449 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:34.449 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3964320 00:22:34.449 [2024-11-19 10:50:24.058714] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:22:34.449 [2024-11-19 10:50:24.068767] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:34.449 [2024-11-19 10:50:24.068917] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:35.383 Initializing NVMe Controllers 00:22:35.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:35.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:22:35.383 Initialization complete. Launching workers. 00:22:35.383 ======================================================== 00:22:35.383 Latency(us) 00:22:35.383 Device Information : IOPS MiB/s Average min max 00:22:35.383 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5124.00 20.02 194.81 122.04 592.65 00:22:35.383 ======================================================== 00:22:35.383 Total : 5124.00 20.02 194.81 122.04 592.65 00:22:35.383 00:22:35.642 Initializing NVMe Controllers 00:22:35.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:35.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:22:35.642 Initialization complete. Launching workers. 
00:22:35.642 ======================================================== 00:22:35.642 Latency(us) 00:22:35.642 Device Information : IOPS MiB/s Average min max 00:22:35.642 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4430.00 17.30 225.35 126.93 41015.79 00:22:35.642 ======================================================== 00:22:35.642 Total : 4430.00 17.30 225.35 126.93 41015.79 00:22:35.642 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3964321 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3964322 00:22:35.642 Initializing NVMe Controllers 00:22:35.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:35.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:22:35.642 Initialization complete. Launching workers. 00:22:35.642 ======================================================== 00:22:35.642 Latency(us) 00:22:35.642 Device Information : IOPS MiB/s Average min max 00:22:35.642 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4914.00 19.20 203.11 120.94 397.41 00:22:35.642 ======================================================== 00:22:35.642 Total : 4914.00 19.20 203.11 120.94 397.41 00:22:35.642 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.642 10:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.642 rmmod nvme_tcp 00:22:35.642 rmmod nvme_fabrics 00:22:35.642 rmmod nvme_keyring 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3964079 ']' 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3964079 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3964079 ']' 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3964079 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3964079 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3964079' 00:22:35.642 killing process with pid 3964079 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3964079 00:22:35.642 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3964079 00:22:35.901 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:35.901 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:35.901 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:35.901 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:22:35.902 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:22:35.902 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:35.902 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:22:35.902 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.902 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:35.902 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.902 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.902 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:38.436 00:22:38.436 real 0m10.833s 00:22:38.436 user 0m7.238s 
00:22:38.436 sys 0m5.752s 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:38.436 ************************************ 00:22:38.436 END TEST nvmf_control_msg_list 00:22:38.436 ************************************ 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:38.436 ************************************ 00:22:38.436 START TEST nvmf_wait_for_buf 00:22:38.436 ************************************ 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:38.436 * Looking for test storage... 
00:22:38.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:38.436 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:22:38.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.436 --rc genhtml_branch_coverage=1 00:22:38.437 --rc genhtml_function_coverage=1 00:22:38.437 --rc genhtml_legend=1 00:22:38.437 --rc geninfo_all_blocks=1 00:22:38.437 --rc geninfo_unexecuted_blocks=1 00:22:38.437 00:22:38.437 ' 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:38.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.437 --rc genhtml_branch_coverage=1 00:22:38.437 --rc genhtml_function_coverage=1 00:22:38.437 --rc genhtml_legend=1 00:22:38.437 --rc geninfo_all_blocks=1 00:22:38.437 --rc geninfo_unexecuted_blocks=1 00:22:38.437 00:22:38.437 ' 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:38.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.437 --rc genhtml_branch_coverage=1 00:22:38.437 --rc genhtml_function_coverage=1 00:22:38.437 --rc genhtml_legend=1 00:22:38.437 --rc geninfo_all_blocks=1 00:22:38.437 --rc geninfo_unexecuted_blocks=1 00:22:38.437 00:22:38.437 ' 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:38.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.437 --rc genhtml_branch_coverage=1 00:22:38.437 --rc genhtml_function_coverage=1 00:22:38.437 --rc genhtml_legend=1 00:22:38.437 --rc geninfo_all_blocks=1 00:22:38.437 --rc geninfo_unexecuted_blocks=1 00:22:38.437 00:22:38.437 ' 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:38.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:38.437 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:45.014 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:45.015 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:45.015 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:45.015 Found net devices under 0000:86:00.0: cvl_0_0 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.015 10:50:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:45.015 Found net devices under 0000:86:00.1: cvl_0_1 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:45.015 10:50:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.015 10:50:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:45.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:22:45.015 00:22:45.015 --- 10.0.0.2 ping statistics --- 00:22:45.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.015 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:45.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:22:45.015 00:22:45.015 --- 10.0.0.1 ping statistics --- 00:22:45.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.015 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3968080 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 3968080 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3968080 ']' 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.015 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:45.015 [2024-11-19 10:50:33.950473] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:22:45.015 [2024-11-19 10:50:33.950519] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.015 [2024-11-19 10:50:34.031750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.015 [2024-11-19 10:50:34.069498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.015 [2024-11-19 10:50:34.069531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:45.015 [2024-11-19 10:50:34.069538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.015 [2024-11-19 10:50:34.069543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.016 [2024-11-19 10:50:34.069548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:45.016 [2024-11-19 10:50:34.070116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.016 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.016 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:45.016 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:45.016 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:45.016 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:45.275 
10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:45.275 Malloc0 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:22:45.275 [2024-11-19 10:50:34.913690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:45.275 [2024-11-19 10:50:34.941887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:45.275 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:45.275 [2024-11-19 10:50:35.025286] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:46.654 Initializing NVMe Controllers 00:22:46.654 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:46.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:46.654 Initialization complete. Launching workers. 00:22:46.654 ======================================================== 00:22:46.654 Latency(us) 00:22:46.654 Device Information : IOPS MiB/s Average min max 00:22:46.654 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 130.00 16.25 31990.56 7257.44 63860.42 00:22:46.654 ======================================================== 00:22:46.654 Total : 130.00 16.25 31990.56 7257.44 63860.42 00:22:46.654 00:22:46.654 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:46.654 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:46.654 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.654 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:46.654 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.914 10:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2054 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2054 -eq 0 ]] 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:46.914 rmmod nvme_tcp 00:22:46.914 rmmod nvme_fabrics 00:22:46.914 rmmod nvme_keyring 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3968080 ']' 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3968080 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3968080 ']' 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3968080 
00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3968080 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3968080' 00:22:46.914 killing process with pid 3968080 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3968080 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3968080 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:46.914 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.174 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.174 10:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.174 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.174 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.174 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.080 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:49.080 00:22:49.080 real 0m11.066s 00:22:49.080 user 0m4.743s 00:22:49.080 sys 0m4.917s 00:22:49.080 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.080 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:49.080 ************************************ 00:22:49.080 END TEST nvmf_wait_for_buf 00:22:49.080 ************************************ 00:22:49.080 10:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:49.080 10:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:49.080 10:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:49.080 10:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:49.080 10:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.080 10:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:55.651 
10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:55.651 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.651 10:50:44 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:55.651 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:55.651 Found net devices under 0000:86:00.0: cvl_0_0 00:22:55.651 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:55.652 Found net devices under 0000:86:00.1: cvl_0_1 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:55.652 ************************************ 00:22:55.652 START TEST nvmf_perf_adq 00:22:55.652 ************************************ 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:55.652 * Looking for test storage... 00:22:55.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:55.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.652 --rc genhtml_branch_coverage=1 00:22:55.652 --rc genhtml_function_coverage=1 00:22:55.652 --rc genhtml_legend=1 00:22:55.652 --rc geninfo_all_blocks=1 00:22:55.652 --rc geninfo_unexecuted_blocks=1 00:22:55.652 00:22:55.652 ' 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:55.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.652 --rc genhtml_branch_coverage=1 00:22:55.652 --rc genhtml_function_coverage=1 00:22:55.652 --rc genhtml_legend=1 00:22:55.652 --rc geninfo_all_blocks=1 00:22:55.652 --rc geninfo_unexecuted_blocks=1 00:22:55.652 00:22:55.652 ' 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:55.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.652 --rc genhtml_branch_coverage=1 00:22:55.652 --rc genhtml_function_coverage=1 00:22:55.652 --rc genhtml_legend=1 00:22:55.652 --rc geninfo_all_blocks=1 00:22:55.652 --rc geninfo_unexecuted_blocks=1 00:22:55.652 00:22:55.652 ' 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:55.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.652 --rc genhtml_branch_coverage=1 00:22:55.652 --rc genhtml_function_coverage=1 00:22:55.652 --rc genhtml_legend=1 00:22:55.652 --rc geninfo_all_blocks=1 00:22:55.652 --rc geninfo_unexecuted_blocks=1 00:22:55.652 00:22:55.652 ' 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.652 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.653 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.653 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:55.653 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.653 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:55.653 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:55.653 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:55.653 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.653 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.653 10:50:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.653 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:55.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:55.653 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:55.653 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:55.653 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:55.653 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:55.653 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:55.653 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:00.932 10:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:00.932 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:00.932 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:00.932 Found net devices under 0000:86:00.0: cvl_0_0 00:23:00.932 10:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:00.932 Found net devices under 0000:86:00.1: cvl_0_1 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:23:00.932 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:23:00.933 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:01.871 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:04.410 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:09.686 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:09.687 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:09.687 10:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:09.687 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:09.687 Found net devices under 0000:86:00.0: cvl_0_0 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:09.687 Found net devices under 0000:86:00.1: cvl_0_1 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:09.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:09.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:23:09.687 00:23:09.687 --- 10.0.0.2 ping statistics --- 00:23:09.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.687 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:09.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:23:09.687 00:23:09.687 --- 10.0.0.1 ping statistics --- 00:23:09.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.687 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
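The setup above is validated by a single ping in each direction between the initiator interface and the namespaced target. A minimal parser for the `rtt min/avg/max/mdev` summary line that ping prints (a sketch, not part of the test scripts) looks like:

```python
import re

# Parses the "rtt min/avg/max/mdev = ..." summary line emitted by ping,
# as captured in the log above.
RTT_RE = re.compile(r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms")

def parse_rtt(line: str) -> dict:
    m = RTT_RE.search(line)
    if not m:
        raise ValueError("not an rtt summary line")
    return dict(zip(("min", "avg", "max", "mdev"), map(float, m.groups())))

print(parse_rtt("rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms"))
# → {'min': 0.395, 'avg': 0.395, 'max': 0.395, 'mdev': 0.0}
```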
start_nvmf_tgt 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3976425 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3976425 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3976425 ']' 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.687 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.687 [2024-11-19 10:50:59.004103] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:23:09.687 [2024-11-19 10:50:59.004158] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.687 [2024-11-19 10:50:59.082633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:09.687 [2024-11-19 10:50:59.124768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.687 [2024-11-19 10:50:59.124807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.687 [2024-11-19 10:50:59.124815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.687 [2024-11-19 10:50:59.124820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.687 [2024-11-19 10:50:59.124825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:09.687 [2024-11-19 10:50:59.126281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.687 [2024-11-19 10:50:59.126389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.687 [2024-11-19 10:50:59.126481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.687 [2024-11-19 10:50:59.126482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:10.252 10:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.252 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.252 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:10.252 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.252 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.252 [2024-11-19 10:51:00.008579] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.252 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.252 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:10.252 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.252 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.510 Malloc1 00:23:10.510 10:51:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.510 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:10.510 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.510 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.510 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.510 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:10.510 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.510 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.510 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.510 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:10.510 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.510 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.510 [2024-11-19 10:51:00.072106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.510 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.510 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3976692 00:23:10.510 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:23:10.510 10:51:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:12.410 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:23:12.410 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.410 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:12.410 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.410 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:23:12.410 "tick_rate": 2100000000, 00:23:12.410 "poll_groups": [ 00:23:12.410 { 00:23:12.410 "name": "nvmf_tgt_poll_group_000", 00:23:12.410 "admin_qpairs": 1, 00:23:12.410 "io_qpairs": 1, 00:23:12.410 "current_admin_qpairs": 1, 00:23:12.410 "current_io_qpairs": 1, 00:23:12.410 "pending_bdev_io": 0, 00:23:12.410 "completed_nvme_io": 20793, 00:23:12.410 "transports": [ 00:23:12.410 { 00:23:12.410 "trtype": "TCP" 00:23:12.410 } 00:23:12.410 ] 00:23:12.410 }, 00:23:12.410 { 00:23:12.410 "name": "nvmf_tgt_poll_group_001", 00:23:12.410 "admin_qpairs": 0, 00:23:12.410 "io_qpairs": 1, 00:23:12.410 "current_admin_qpairs": 0, 00:23:12.410 "current_io_qpairs": 1, 00:23:12.410 "pending_bdev_io": 0, 00:23:12.410 "completed_nvme_io": 20818, 00:23:12.410 "transports": [ 00:23:12.410 { 00:23:12.410 "trtype": "TCP" 00:23:12.410 } 00:23:12.410 ] 00:23:12.410 }, 00:23:12.410 { 00:23:12.410 "name": "nvmf_tgt_poll_group_002", 00:23:12.410 "admin_qpairs": 0, 00:23:12.410 "io_qpairs": 1, 00:23:12.410 "current_admin_qpairs": 0, 00:23:12.410 "current_io_qpairs": 1, 00:23:12.410 "pending_bdev_io": 0, 00:23:12.410 "completed_nvme_io": 20397, 00:23:12.410 
"transports": [ 00:23:12.410 { 00:23:12.410 "trtype": "TCP" 00:23:12.410 } 00:23:12.410 ] 00:23:12.410 }, 00:23:12.410 { 00:23:12.410 "name": "nvmf_tgt_poll_group_003", 00:23:12.410 "admin_qpairs": 0, 00:23:12.410 "io_qpairs": 1, 00:23:12.410 "current_admin_qpairs": 0, 00:23:12.410 "current_io_qpairs": 1, 00:23:12.410 "pending_bdev_io": 0, 00:23:12.410 "completed_nvme_io": 20437, 00:23:12.410 "transports": [ 00:23:12.410 { 00:23:12.410 "trtype": "TCP" 00:23:12.410 } 00:23:12.410 ] 00:23:12.410 } 00:23:12.410 ] 00:23:12.410 }' 00:23:12.410 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:12.410 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:23:12.410 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:23:12.410 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:23:12.410 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3976692 00:23:20.521 Initializing NVMe Controllers 00:23:20.521 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:20.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:20.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:20.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:20.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:20.521 Initialization complete. Launching workers. 
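The `jq ... | wc -l` check above counts poll groups that currently own exactly one I/O qpair and passes only when all four do. The same selection in Python, over an abridged copy of the `nvmf_get_stats` shape shown in the log:

```python
import json

# Mirrors: jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l
# over the nvmf_get_stats output captured above (fields trimmed for brevity).
stats = json.loads("""{
  "tick_rate": 2100000000,
  "poll_groups": [
    {"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1},
    {"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1},
    {"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1},
    {"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1}
  ]
}""")

count = sum(1 for pg in stats["poll_groups"] if pg["current_io_qpairs"] == 1)
print(count)  # → 4; the ADQ test expects one active I/O qpair per poll group
```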
00:23:20.521 ======================================================== 00:23:20.521 Latency(us) 00:23:20.521 Device Information : IOPS MiB/s Average min max 00:23:20.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10662.59 41.65 6003.81 2411.68 9899.99 00:23:20.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10906.79 42.60 5867.57 2247.60 13354.18 00:23:20.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10750.99 42.00 5954.01 2083.98 10324.01 00:23:20.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10804.09 42.20 5922.86 2181.44 10405.57 00:23:20.521 ======================================================== 00:23:20.521 Total : 43124.46 168.45 5936.66 2083.98 13354.18 00:23:20.521 00:23:20.521 [2024-11-19 10:51:10.199551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a0520 is same with the state(6) to be set 00:23:20.521 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:23:20.521 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:20.521 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:20.521 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:20.521 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:20.521 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:20.521 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:20.521 rmmod nvme_tcp 00:23:20.521 rmmod nvme_fabrics 00:23:20.521 rmmod nvme_keyring 00:23:20.521 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:20.521 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- 
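As a sanity check on the perf summary above, the Total IOPS row is just the sum of the four per-core rows (values copied from the log):

```python
# Per-core IOPS from the spdk_nvme_perf summary above (lcores 4-7).
iops = [10662.59, 10906.79, 10750.99, 10804.09]

total = round(sum(iops), 2)
print(total)  # → 43124.46, matching the "Total" row in the log
```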
# set -e 00:23:20.521 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:20.521 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3976425 ']' 00:23:20.521 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3976425 00:23:20.521 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3976425 ']' 00:23:20.521 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3976425 00:23:20.521 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:20.521 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.521 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3976425 00:23:20.780 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.780 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.780 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3976425' 00:23:20.780 killing process with pid 3976425 00:23:20.780 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3976425 00:23:20.780 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3976425 00:23:20.780 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:20.780 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:20.780 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:20.780 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 
-- # iptr 00:23:20.780 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:20.780 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:20.780 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:20.780 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:20.780 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:20.780 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.780 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.780 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.315 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:23.315 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:23:23.315 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:23.315 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:24.259 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:26.167 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.445 10:51:20 
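The `iptr` teardown above works because every rule the test installed was tagged with an `SPDK_NVMF:` comment; cleanup is then `iptables-save | grep -v SPDK_NVMF | iptables-restore`. The filtering step itself amounts to (sketch; rule text below is illustrative):

```python
# Simulates the grep -v SPDK_NVMF filter applied to an iptables-save dump.
saved = """-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 ..."
-A INPUT -i lo -j ACCEPT"""

kept = [line for line in saved.splitlines() if "SPDK_NVMF" not in line]
print(kept)  # only the unrelated rule survives the cleanup
```

Tagging rules with a comment keeps the cleanup idempotent: unrelated firewall rules on the build host are never touched.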
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # 
net_devs=() 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:31.445 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:31.445 
10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:31.445 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.0: cvl_0_0' 00:23:31.445 Found net devices under 0000:86:00.0: cvl_0_0 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:31.445 Found net devices under 0000:86:00.1: cvl_0_1 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:31.445 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:31.446 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:31.446 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:31.446 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:31.446 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:31.446 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:31.446 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:31.446 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:23:31.446 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:31.446 10:51:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:31.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:31.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:23:31.446 00:23:31.446 --- 10.0.0.2 ping statistics --- 00:23:31.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.446 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:31.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:31.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:23:31.446 00:23:31.446 --- 10.0.0.1 ping statistics --- 00:23:31.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.446 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:31.446 net.core.busy_poll = 1 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:31.446 net.core.busy_read = 1 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:31.446 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:31.705 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:31.705 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:31.705 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:31.705 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:31.705 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:31.705 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:31.705 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:31.705 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3980967 00:23:31.705 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3980967 00:23:31.705 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:23:31.705 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3980967 ']' 00:23:31.705 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.705 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.705 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.705 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.705 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:31.705 [2024-11-19 10:51:21.399831] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:23:31.705 [2024-11-19 10:51:21.399876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.705 [2024-11-19 10:51:21.478287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:31.965 [2024-11-19 10:51:21.519240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.965 [2024-11-19 10:51:21.519272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.965 [2024-11-19 10:51:21.519279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.965 [2024-11-19 10:51:21.519284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:31.965 [2024-11-19 10:51:21.519289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:31.965 [2024-11-19 10:51:21.520885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.965 [2024-11-19 10:51:21.520992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.965 [2024-11-19 10:51:21.521091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.965 [2024-11-19 10:51:21.521092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:32.530 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.530 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:32.530 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:32.530 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:32.530 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.530 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.530 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:32.530 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:32.530 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:32.530 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.530 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.530 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:32.530 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:32.530 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:32.530 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.530 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.789 [2024-11-19 10:51:22.412374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.789 10:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.789 Malloc1 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.789 [2024-11-19 10:51:22.476486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3981219 
00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:32.789 10:51:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:35.316 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:35.316 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.316 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:35.316 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.316 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:35.316 "tick_rate": 2100000000, 00:23:35.316 "poll_groups": [ 00:23:35.316 { 00:23:35.316 "name": "nvmf_tgt_poll_group_000", 00:23:35.316 "admin_qpairs": 1, 00:23:35.317 "io_qpairs": 2, 00:23:35.317 "current_admin_qpairs": 1, 00:23:35.317 "current_io_qpairs": 2, 00:23:35.317 "pending_bdev_io": 0, 00:23:35.317 "completed_nvme_io": 27612, 00:23:35.317 "transports": [ 00:23:35.317 { 00:23:35.317 "trtype": "TCP" 00:23:35.317 } 00:23:35.317 ] 00:23:35.317 }, 00:23:35.317 { 00:23:35.317 "name": "nvmf_tgt_poll_group_001", 00:23:35.317 "admin_qpairs": 0, 00:23:35.317 "io_qpairs": 2, 00:23:35.317 "current_admin_qpairs": 0, 00:23:35.317 "current_io_qpairs": 2, 00:23:35.317 "pending_bdev_io": 0, 00:23:35.317 "completed_nvme_io": 29313, 00:23:35.317 "transports": [ 00:23:35.317 { 00:23:35.317 "trtype": "TCP" 00:23:35.317 } 00:23:35.317 ] 00:23:35.317 }, 00:23:35.317 { 00:23:35.317 "name": "nvmf_tgt_poll_group_002", 00:23:35.317 "admin_qpairs": 0, 00:23:35.317 "io_qpairs": 0, 00:23:35.317 "current_admin_qpairs": 0, 
00:23:35.317 "current_io_qpairs": 0, 00:23:35.317 "pending_bdev_io": 0, 00:23:35.317 "completed_nvme_io": 0, 00:23:35.317 "transports": [ 00:23:35.317 { 00:23:35.317 "trtype": "TCP" 00:23:35.317 } 00:23:35.317 ] 00:23:35.317 }, 00:23:35.317 { 00:23:35.317 "name": "nvmf_tgt_poll_group_003", 00:23:35.317 "admin_qpairs": 0, 00:23:35.317 "io_qpairs": 0, 00:23:35.317 "current_admin_qpairs": 0, 00:23:35.317 "current_io_qpairs": 0, 00:23:35.317 "pending_bdev_io": 0, 00:23:35.317 "completed_nvme_io": 0, 00:23:35.317 "transports": [ 00:23:35.317 { 00:23:35.317 "trtype": "TCP" 00:23:35.317 } 00:23:35.317 ] 00:23:35.317 } 00:23:35.317 ] 00:23:35.317 }' 00:23:35.317 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:35.317 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:35.317 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:23:35.317 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:23:35.317 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3981219 00:23:43.550 Initializing NVMe Controllers 00:23:43.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:43.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:43.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:43.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:43.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:43.550 Initialization complete. Launching workers. 
00:23:43.550 ======================================================== 00:23:43.550 Latency(us) 00:23:43.550 Device Information : IOPS MiB/s Average min max 00:23:43.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7323.30 28.61 8748.98 1419.73 52437.66 00:23:43.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7583.60 29.62 8466.02 1576.51 52383.01 00:23:43.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7590.80 29.65 8433.19 1250.59 52632.80 00:23:43.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7909.10 30.89 8100.84 1146.55 52859.56 00:23:43.550 ======================================================== 00:23:43.550 Total : 30406.80 118.78 8430.98 1146.55 52859.56 00:23:43.550 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:43.550 rmmod nvme_tcp 00:23:43.550 rmmod nvme_fabrics 00:23:43.550 rmmod nvme_keyring 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:43.550 10:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3980967 ']' 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3980967 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3980967 ']' 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3980967 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3980967 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3980967' 00:23:43.550 killing process with pid 3980967 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3980967 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3980967 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:43.550 
10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.550 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:46.842 00:23:46.842 real 0m51.537s 00:23:46.842 user 2m49.532s 00:23:46.842 sys 0m10.292s 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:46.842 ************************************ 00:23:46.842 END TEST nvmf_perf_adq 00:23:46.842 ************************************ 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:23:46.842 ************************************ 00:23:46.842 START TEST nvmf_shutdown 00:23:46.842 ************************************ 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:46.842 * Looking for test storage... 00:23:46.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.842 10:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:46.842 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:46.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.843 --rc genhtml_branch_coverage=1 00:23:46.843 --rc genhtml_function_coverage=1 00:23:46.843 --rc genhtml_legend=1 00:23:46.843 --rc geninfo_all_blocks=1 00:23:46.843 --rc geninfo_unexecuted_blocks=1 00:23:46.843 00:23:46.843 ' 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:46.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.843 --rc genhtml_branch_coverage=1 00:23:46.843 --rc genhtml_function_coverage=1 00:23:46.843 --rc genhtml_legend=1 00:23:46.843 --rc geninfo_all_blocks=1 00:23:46.843 --rc geninfo_unexecuted_blocks=1 00:23:46.843 00:23:46.843 ' 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:46.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.843 --rc genhtml_branch_coverage=1 00:23:46.843 --rc genhtml_function_coverage=1 00:23:46.843 --rc genhtml_legend=1 00:23:46.843 --rc geninfo_all_blocks=1 00:23:46.843 --rc geninfo_unexecuted_blocks=1 00:23:46.843 00:23:46.843 ' 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:46.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.843 --rc genhtml_branch_coverage=1 00:23:46.843 --rc genhtml_function_coverage=1 00:23:46.843 --rc genhtml_legend=1 00:23:46.843 --rc geninfo_all_blocks=1 00:23:46.843 --rc geninfo_unexecuted_blocks=1 00:23:46.843 00:23:46.843 ' 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:46.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:46.843 ************************************ 00:23:46.843 START TEST nvmf_shutdown_tc1 00:23:46.843 ************************************ 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:23:46.843 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.844 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:46.844 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:46.844 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:46.844 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:53.420 10:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.420 10:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:53.420 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.420 10:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:53.420 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:53.420 Found net devices under 0000:86:00.0: cvl_0_0 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.420 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:53.421 Found net devices under 0000:86:00.1: cvl_0_1 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:53.421 10:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:53.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:53.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:23:53.421 00:23:53.421 --- 10.0.0.2 ping statistics --- 00:23:53.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.421 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:53.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:23:53.421 00:23:53.421 --- 10.0.0.1 ping statistics --- 00:23:53.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.421 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3986589 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3986589 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3986589 ']' 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:53.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.421 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:53.421 [2024-11-19 10:51:42.447493] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:23:53.421 [2024-11-19 10:51:42.447542] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.421 [2024-11-19 10:51:42.527828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:53.421 [2024-11-19 10:51:42.569837] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.421 [2024-11-19 10:51:42.569874] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.421 [2024-11-19 10:51:42.569881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.421 [2024-11-19 10:51:42.569887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.421 [2024-11-19 10:51:42.569892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:53.421 [2024-11-19 10:51:42.571369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.421 [2024-11-19 10:51:42.571478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.421 [2024-11-19 10:51:42.571581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.421 [2024-11-19 10:51:42.571582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:53.680 [2024-11-19 10:51:43.326185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.680 10:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.680 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:53.681 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.681 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:53.681 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.681 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:53.681 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.681 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:53.681 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.681 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:53.681 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:53.681 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.681 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:53.681 Malloc1 00:23:53.681 [2024-11-19 10:51:43.435330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.681 Malloc2 00:23:53.939 Malloc3 00:23:53.939 Malloc4 00:23:53.939 Malloc5 00:23:53.939 Malloc6 00:23:53.939 Malloc7 00:23:53.939 Malloc8 00:23:54.198 Malloc9 
00:23:54.199 Malloc10 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3986898 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3986898 /var/tmp/bdevperf.sock 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3986898 ']' 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:54.199 { 00:23:54.199 "params": { 00:23:54.199 "name": "Nvme$subsystem", 00:23:54.199 "trtype": "$TEST_TRANSPORT", 00:23:54.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.199 "adrfam": "ipv4", 00:23:54.199 "trsvcid": "$NVMF_PORT", 00:23:54.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.199 "hdgst": ${hdgst:-false}, 00:23:54.199 "ddgst": ${ddgst:-false} 00:23:54.199 }, 00:23:54.199 "method": "bdev_nvme_attach_controller" 00:23:54.199 } 00:23:54.199 EOF 00:23:54.199 )") 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:54.199 { 00:23:54.199 "params": { 00:23:54.199 "name": "Nvme$subsystem", 00:23:54.199 "trtype": "$TEST_TRANSPORT", 00:23:54.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.199 "adrfam": "ipv4", 00:23:54.199 "trsvcid": "$NVMF_PORT", 00:23:54.199 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.199 "hdgst": ${hdgst:-false}, 00:23:54.199 "ddgst": ${ddgst:-false} 00:23:54.199 }, 00:23:54.199 "method": "bdev_nvme_attach_controller" 00:23:54.199 } 00:23:54.199 EOF 00:23:54.199 )") 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:54.199 { 00:23:54.199 "params": { 00:23:54.199 "name": "Nvme$subsystem", 00:23:54.199 "trtype": "$TEST_TRANSPORT", 00:23:54.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.199 "adrfam": "ipv4", 00:23:54.199 "trsvcid": "$NVMF_PORT", 00:23:54.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.199 "hdgst": ${hdgst:-false}, 00:23:54.199 "ddgst": ${ddgst:-false} 00:23:54.199 }, 00:23:54.199 "method": "bdev_nvme_attach_controller" 00:23:54.199 } 00:23:54.199 EOF 00:23:54.199 )") 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:54.199 { 00:23:54.199 "params": { 00:23:54.199 "name": "Nvme$subsystem", 00:23:54.199 "trtype": "$TEST_TRANSPORT", 00:23:54.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.199 "adrfam": "ipv4", 00:23:54.199 "trsvcid": "$NVMF_PORT", 00:23:54.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.199 "hdgst": 
${hdgst:-false}, 00:23:54.199 "ddgst": ${ddgst:-false} 00:23:54.199 }, 00:23:54.199 "method": "bdev_nvme_attach_controller" 00:23:54.199 } 00:23:54.199 EOF 00:23:54.199 )") 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:54.199 { 00:23:54.199 "params": { 00:23:54.199 "name": "Nvme$subsystem", 00:23:54.199 "trtype": "$TEST_TRANSPORT", 00:23:54.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.199 "adrfam": "ipv4", 00:23:54.199 "trsvcid": "$NVMF_PORT", 00:23:54.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.199 "hdgst": ${hdgst:-false}, 00:23:54.199 "ddgst": ${ddgst:-false} 00:23:54.199 }, 00:23:54.199 "method": "bdev_nvme_attach_controller" 00:23:54.199 } 00:23:54.199 EOF 00:23:54.199 )") 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:54.199 { 00:23:54.199 "params": { 00:23:54.199 "name": "Nvme$subsystem", 00:23:54.199 "trtype": "$TEST_TRANSPORT", 00:23:54.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.199 "adrfam": "ipv4", 00:23:54.199 "trsvcid": "$NVMF_PORT", 00:23:54.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.199 "hdgst": ${hdgst:-false}, 00:23:54.199 "ddgst": ${ddgst:-false} 00:23:54.199 }, 00:23:54.199 "method": "bdev_nvme_attach_controller" 
00:23:54.199 } 00:23:54.199 EOF 00:23:54.199 )") 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:54.199 [2024-11-19 10:51:43.907112] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:23:54.199 [2024-11-19 10:51:43.907167] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:54.199 { 00:23:54.199 "params": { 00:23:54.199 "name": "Nvme$subsystem", 00:23:54.199 "trtype": "$TEST_TRANSPORT", 00:23:54.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.199 "adrfam": "ipv4", 00:23:54.199 "trsvcid": "$NVMF_PORT", 00:23:54.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.199 "hdgst": ${hdgst:-false}, 00:23:54.199 "ddgst": ${ddgst:-false} 00:23:54.199 }, 00:23:54.199 "method": "bdev_nvme_attach_controller" 00:23:54.199 } 00:23:54.199 EOF 00:23:54.199 )") 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:54.199 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:54.199 { 00:23:54.199 "params": { 00:23:54.199 "name": "Nvme$subsystem", 00:23:54.199 "trtype": "$TEST_TRANSPORT", 00:23:54.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.199 "adrfam": "ipv4", 00:23:54.199 "trsvcid": "$NVMF_PORT", 
00:23:54.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.199 "hdgst": ${hdgst:-false}, 00:23:54.199 "ddgst": ${ddgst:-false} 00:23:54.199 }, 00:23:54.199 "method": "bdev_nvme_attach_controller" 00:23:54.199 } 00:23:54.199 EOF 00:23:54.199 )") 00:23:54.200 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:54.200 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:54.200 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:54.200 { 00:23:54.200 "params": { 00:23:54.200 "name": "Nvme$subsystem", 00:23:54.200 "trtype": "$TEST_TRANSPORT", 00:23:54.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.200 "adrfam": "ipv4", 00:23:54.200 "trsvcid": "$NVMF_PORT", 00:23:54.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.200 "hdgst": ${hdgst:-false}, 00:23:54.200 "ddgst": ${ddgst:-false} 00:23:54.200 }, 00:23:54.200 "method": "bdev_nvme_attach_controller" 00:23:54.200 } 00:23:54.200 EOF 00:23:54.200 )") 00:23:54.200 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:54.200 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:54.200 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:54.200 { 00:23:54.200 "params": { 00:23:54.200 "name": "Nvme$subsystem", 00:23:54.200 "trtype": "$TEST_TRANSPORT", 00:23:54.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.200 "adrfam": "ipv4", 00:23:54.200 "trsvcid": "$NVMF_PORT", 00:23:54.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:23:54.200 "hdgst": ${hdgst:-false}, 00:23:54.200 "ddgst": ${ddgst:-false} 00:23:54.200 }, 00:23:54.200 "method": "bdev_nvme_attach_controller" 00:23:54.200 } 00:23:54.200 EOF 00:23:54.200 )") 00:23:54.200 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:54.200 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:54.200 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:54.200 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:54.200 "params": { 00:23:54.200 "name": "Nvme1", 00:23:54.200 "trtype": "tcp", 00:23:54.200 "traddr": "10.0.0.2", 00:23:54.200 "adrfam": "ipv4", 00:23:54.200 "trsvcid": "4420", 00:23:54.200 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.200 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:54.200 "hdgst": false, 00:23:54.200 "ddgst": false 00:23:54.200 }, 00:23:54.200 "method": "bdev_nvme_attach_controller" 00:23:54.200 },{ 00:23:54.200 "params": { 00:23:54.200 "name": "Nvme2", 00:23:54.200 "trtype": "tcp", 00:23:54.200 "traddr": "10.0.0.2", 00:23:54.200 "adrfam": "ipv4", 00:23:54.200 "trsvcid": "4420", 00:23:54.200 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:54.200 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:54.200 "hdgst": false, 00:23:54.200 "ddgst": false 00:23:54.200 }, 00:23:54.200 "method": "bdev_nvme_attach_controller" 00:23:54.200 },{ 00:23:54.200 "params": { 00:23:54.200 "name": "Nvme3", 00:23:54.200 "trtype": "tcp", 00:23:54.200 "traddr": "10.0.0.2", 00:23:54.200 "adrfam": "ipv4", 00:23:54.200 "trsvcid": "4420", 00:23:54.200 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:54.200 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:54.200 "hdgst": false, 00:23:54.200 "ddgst": false 00:23:54.200 }, 00:23:54.200 "method": "bdev_nvme_attach_controller" 00:23:54.200 },{ 00:23:54.200 "params": { 00:23:54.200 
"name": "Nvme4", 00:23:54.200 "trtype": "tcp", 00:23:54.200 "traddr": "10.0.0.2", 00:23:54.200 "adrfam": "ipv4", 00:23:54.200 "trsvcid": "4420", 00:23:54.200 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:54.200 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:54.200 "hdgst": false, 00:23:54.200 "ddgst": false 00:23:54.200 }, 00:23:54.200 "method": "bdev_nvme_attach_controller" 00:23:54.200 },{ 00:23:54.200 "params": { 00:23:54.200 "name": "Nvme5", 00:23:54.200 "trtype": "tcp", 00:23:54.200 "traddr": "10.0.0.2", 00:23:54.200 "adrfam": "ipv4", 00:23:54.200 "trsvcid": "4420", 00:23:54.200 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:54.200 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:54.200 "hdgst": false, 00:23:54.200 "ddgst": false 00:23:54.200 }, 00:23:54.200 "method": "bdev_nvme_attach_controller" 00:23:54.200 },{ 00:23:54.200 "params": { 00:23:54.200 "name": "Nvme6", 00:23:54.200 "trtype": "tcp", 00:23:54.200 "traddr": "10.0.0.2", 00:23:54.200 "adrfam": "ipv4", 00:23:54.200 "trsvcid": "4420", 00:23:54.200 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:54.200 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:54.200 "hdgst": false, 00:23:54.200 "ddgst": false 00:23:54.200 }, 00:23:54.200 "method": "bdev_nvme_attach_controller" 00:23:54.200 },{ 00:23:54.200 "params": { 00:23:54.200 "name": "Nvme7", 00:23:54.200 "trtype": "tcp", 00:23:54.200 "traddr": "10.0.0.2", 00:23:54.200 "adrfam": "ipv4", 00:23:54.200 "trsvcid": "4420", 00:23:54.200 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:54.200 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:54.200 "hdgst": false, 00:23:54.200 "ddgst": false 00:23:54.200 }, 00:23:54.200 "method": "bdev_nvme_attach_controller" 00:23:54.200 },{ 00:23:54.200 "params": { 00:23:54.200 "name": "Nvme8", 00:23:54.200 "trtype": "tcp", 00:23:54.200 "traddr": "10.0.0.2", 00:23:54.200 "adrfam": "ipv4", 00:23:54.200 "trsvcid": "4420", 00:23:54.200 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:54.200 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:54.200 
"hdgst": false, 00:23:54.200 "ddgst": false 00:23:54.200 }, 00:23:54.200 "method": "bdev_nvme_attach_controller" 00:23:54.200 },{ 00:23:54.200 "params": { 00:23:54.200 "name": "Nvme9", 00:23:54.200 "trtype": "tcp", 00:23:54.200 "traddr": "10.0.0.2", 00:23:54.200 "adrfam": "ipv4", 00:23:54.200 "trsvcid": "4420", 00:23:54.200 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:54.200 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:54.200 "hdgst": false, 00:23:54.200 "ddgst": false 00:23:54.200 }, 00:23:54.200 "method": "bdev_nvme_attach_controller" 00:23:54.200 },{ 00:23:54.200 "params": { 00:23:54.200 "name": "Nvme10", 00:23:54.200 "trtype": "tcp", 00:23:54.200 "traddr": "10.0.0.2", 00:23:54.200 "adrfam": "ipv4", 00:23:54.200 "trsvcid": "4420", 00:23:54.200 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:54.200 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:54.200 "hdgst": false, 00:23:54.200 "ddgst": false 00:23:54.200 }, 00:23:54.200 "method": "bdev_nvme_attach_controller" 00:23:54.200 }' 00:23:54.200 [2024-11-19 10:51:43.984699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.459 [2024-11-19 10:51:44.026142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.360 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.360 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:56.360 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:56.360 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.360 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:56.360 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.360 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3986898 00:23:56.360 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:56.360 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:57.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3986898 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3986589 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:57.298 { 00:23:57.298 "params": { 00:23:57.298 "name": "Nvme$subsystem", 00:23:57.298 "trtype": "$TEST_TRANSPORT", 00:23:57.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.298 "adrfam": "ipv4", 00:23:57.298 "trsvcid": "$NVMF_PORT", 00:23:57.298 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.298 "hdgst": ${hdgst:-false}, 00:23:57.298 "ddgst": ${ddgst:-false} 00:23:57.298 }, 00:23:57.298 "method": "bdev_nvme_attach_controller" 00:23:57.298 } 00:23:57.298 EOF 00:23:57.298 )") 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:57.298 { 00:23:57.298 "params": { 00:23:57.298 "name": "Nvme$subsystem", 00:23:57.298 "trtype": "$TEST_TRANSPORT", 00:23:57.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.298 "adrfam": "ipv4", 00:23:57.298 "trsvcid": "$NVMF_PORT", 00:23:57.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.298 "hdgst": ${hdgst:-false}, 00:23:57.298 "ddgst": ${ddgst:-false} 00:23:57.298 }, 00:23:57.298 "method": "bdev_nvme_attach_controller" 00:23:57.298 } 00:23:57.298 EOF 00:23:57.298 )") 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:57.298 { 00:23:57.298 "params": { 00:23:57.298 "name": "Nvme$subsystem", 00:23:57.298 "trtype": "$TEST_TRANSPORT", 00:23:57.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.298 "adrfam": "ipv4", 00:23:57.298 "trsvcid": "$NVMF_PORT", 00:23:57.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.298 "hdgst": 
${hdgst:-false}, 00:23:57.298 "ddgst": ${ddgst:-false} 00:23:57.298 }, 00:23:57.298 "method": "bdev_nvme_attach_controller" 00:23:57.298 } 00:23:57.298 EOF 00:23:57.298 )") 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:57.298 { 00:23:57.298 "params": { 00:23:57.298 "name": "Nvme$subsystem", 00:23:57.298 "trtype": "$TEST_TRANSPORT", 00:23:57.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.298 "adrfam": "ipv4", 00:23:57.298 "trsvcid": "$NVMF_PORT", 00:23:57.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.298 "hdgst": ${hdgst:-false}, 00:23:57.298 "ddgst": ${ddgst:-false} 00:23:57.298 }, 00:23:57.298 "method": "bdev_nvme_attach_controller" 00:23:57.298 } 00:23:57.298 EOF 00:23:57.298 )") 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:57.298 { 00:23:57.298 "params": { 00:23:57.298 "name": "Nvme$subsystem", 00:23:57.298 "trtype": "$TEST_TRANSPORT", 00:23:57.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.298 "adrfam": "ipv4", 00:23:57.298 "trsvcid": "$NVMF_PORT", 00:23:57.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.298 "hdgst": ${hdgst:-false}, 00:23:57.298 "ddgst": ${ddgst:-false} 00:23:57.298 }, 00:23:57.298 "method": "bdev_nvme_attach_controller" 
00:23:57.298 } 00:23:57.298 EOF 00:23:57.298 )") 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:57.298 { 00:23:57.298 "params": { 00:23:57.298 "name": "Nvme$subsystem", 00:23:57.298 "trtype": "$TEST_TRANSPORT", 00:23:57.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.298 "adrfam": "ipv4", 00:23:57.298 "trsvcid": "$NVMF_PORT", 00:23:57.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.298 "hdgst": ${hdgst:-false}, 00:23:57.298 "ddgst": ${ddgst:-false} 00:23:57.298 }, 00:23:57.298 "method": "bdev_nvme_attach_controller" 00:23:57.298 } 00:23:57.298 EOF 00:23:57.298 )") 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:57.298 { 00:23:57.298 "params": { 00:23:57.298 "name": "Nvme$subsystem", 00:23:57.298 "trtype": "$TEST_TRANSPORT", 00:23:57.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.298 "adrfam": "ipv4", 00:23:57.298 "trsvcid": "$NVMF_PORT", 00:23:57.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.298 "hdgst": ${hdgst:-false}, 00:23:57.298 "ddgst": ${ddgst:-false} 00:23:57.298 }, 00:23:57.298 "method": "bdev_nvme_attach_controller" 00:23:57.298 } 00:23:57.298 EOF 00:23:57.298 )") 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:23:57.298 [2024-11-19 10:51:46.840061] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:23:57.298 [2024-11-19 10:51:46.840110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3987447 ] 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:57.298 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:57.298 { 00:23:57.298 "params": { 00:23:57.298 "name": "Nvme$subsystem", 00:23:57.298 "trtype": "$TEST_TRANSPORT", 00:23:57.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.298 "adrfam": "ipv4", 00:23:57.298 "trsvcid": "$NVMF_PORT", 00:23:57.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.299 "hdgst": ${hdgst:-false}, 00:23:57.299 "ddgst": ${ddgst:-false} 00:23:57.299 }, 00:23:57.299 "method": "bdev_nvme_attach_controller" 00:23:57.299 } 00:23:57.299 EOF 00:23:57.299 )") 00:23:57.299 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:57.299 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:57.299 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:57.299 { 00:23:57.299 "params": { 00:23:57.299 "name": "Nvme$subsystem", 00:23:57.299 "trtype": "$TEST_TRANSPORT", 00:23:57.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.299 "adrfam": "ipv4", 00:23:57.299 "trsvcid": "$NVMF_PORT", 00:23:57.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.299 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:57.299 "hdgst": ${hdgst:-false}, 00:23:57.299 "ddgst": ${ddgst:-false} 00:23:57.299 }, 00:23:57.299 "method": "bdev_nvme_attach_controller" 00:23:57.299 } 00:23:57.299 EOF 00:23:57.299 )") 00:23:57.299 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:57.299 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:57.299 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:57.299 { 00:23:57.299 "params": { 00:23:57.299 "name": "Nvme$subsystem", 00:23:57.299 "trtype": "$TEST_TRANSPORT", 00:23:57.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.299 "adrfam": "ipv4", 00:23:57.299 "trsvcid": "$NVMF_PORT", 00:23:57.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.299 "hdgst": ${hdgst:-false}, 00:23:57.299 "ddgst": ${ddgst:-false} 00:23:57.299 }, 00:23:57.299 "method": "bdev_nvme_attach_controller" 00:23:57.299 } 00:23:57.299 EOF 00:23:57.299 )") 00:23:57.299 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:57.299 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:23:57.299 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:57.299 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:57.299 "params": { 00:23:57.299 "name": "Nvme1", 00:23:57.299 "trtype": "tcp", 00:23:57.299 "traddr": "10.0.0.2", 00:23:57.299 "adrfam": "ipv4", 00:23:57.299 "trsvcid": "4420", 00:23:57.299 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.299 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.299 "hdgst": false, 00:23:57.299 "ddgst": false 00:23:57.299 }, 00:23:57.299 "method": "bdev_nvme_attach_controller" 00:23:57.299 },{ 00:23:57.299 "params": { 00:23:57.299 "name": "Nvme2", 00:23:57.299 "trtype": "tcp", 00:23:57.299 "traddr": "10.0.0.2", 00:23:57.299 "adrfam": "ipv4", 00:23:57.299 "trsvcid": "4420", 00:23:57.299 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:57.299 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:57.299 "hdgst": false, 00:23:57.299 "ddgst": false 00:23:57.299 }, 00:23:57.299 "method": "bdev_nvme_attach_controller" 00:23:57.299 },{ 00:23:57.299 "params": { 00:23:57.299 "name": "Nvme3", 00:23:57.299 "trtype": "tcp", 00:23:57.299 "traddr": "10.0.0.2", 00:23:57.299 "adrfam": "ipv4", 00:23:57.299 "trsvcid": "4420", 00:23:57.299 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:57.299 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:57.299 "hdgst": false, 00:23:57.299 "ddgst": false 00:23:57.299 }, 00:23:57.299 "method": "bdev_nvme_attach_controller" 00:23:57.299 },{ 00:23:57.299 "params": { 00:23:57.299 "name": "Nvme4", 00:23:57.299 "trtype": "tcp", 00:23:57.299 "traddr": "10.0.0.2", 00:23:57.299 "adrfam": "ipv4", 00:23:57.299 "trsvcid": "4420", 00:23:57.299 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:57.299 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:57.299 "hdgst": false, 00:23:57.299 "ddgst": false 00:23:57.299 }, 00:23:57.299 "method": "bdev_nvme_attach_controller" 00:23:57.299 },{ 00:23:57.299 "params": { 
00:23:57.299 "name": "Nvme5", 00:23:57.299 "trtype": "tcp", 00:23:57.299 "traddr": "10.0.0.2", 00:23:57.299 "adrfam": "ipv4", 00:23:57.299 "trsvcid": "4420", 00:23:57.299 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:57.299 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:57.299 "hdgst": false, 00:23:57.299 "ddgst": false 00:23:57.299 }, 00:23:57.299 "method": "bdev_nvme_attach_controller" 00:23:57.299 },{ 00:23:57.299 "params": { 00:23:57.299 "name": "Nvme6", 00:23:57.299 "trtype": "tcp", 00:23:57.299 "traddr": "10.0.0.2", 00:23:57.299 "adrfam": "ipv4", 00:23:57.299 "trsvcid": "4420", 00:23:57.299 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:57.299 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:57.299 "hdgst": false, 00:23:57.299 "ddgst": false 00:23:57.299 }, 00:23:57.299 "method": "bdev_nvme_attach_controller" 00:23:57.299 },{ 00:23:57.299 "params": { 00:23:57.299 "name": "Nvme7", 00:23:57.299 "trtype": "tcp", 00:23:57.299 "traddr": "10.0.0.2", 00:23:57.299 "adrfam": "ipv4", 00:23:57.299 "trsvcid": "4420", 00:23:57.299 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:57.299 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:57.299 "hdgst": false, 00:23:57.299 "ddgst": false 00:23:57.299 }, 00:23:57.299 "method": "bdev_nvme_attach_controller" 00:23:57.299 },{ 00:23:57.299 "params": { 00:23:57.299 "name": "Nvme8", 00:23:57.299 "trtype": "tcp", 00:23:57.299 "traddr": "10.0.0.2", 00:23:57.299 "adrfam": "ipv4", 00:23:57.299 "trsvcid": "4420", 00:23:57.299 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:57.299 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:57.299 "hdgst": false, 00:23:57.299 "ddgst": false 00:23:57.299 }, 00:23:57.299 "method": "bdev_nvme_attach_controller" 00:23:57.299 },{ 00:23:57.299 "params": { 00:23:57.299 "name": "Nvme9", 00:23:57.299 "trtype": "tcp", 00:23:57.299 "traddr": "10.0.0.2", 00:23:57.299 "adrfam": "ipv4", 00:23:57.299 "trsvcid": "4420", 00:23:57.299 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:57.299 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:57.299 "hdgst": false, 00:23:57.299 "ddgst": false 00:23:57.299 }, 00:23:57.299 "method": "bdev_nvme_attach_controller" 00:23:57.299 },{ 00:23:57.299 "params": { 00:23:57.299 "name": "Nvme10", 00:23:57.299 "trtype": "tcp", 00:23:57.299 "traddr": "10.0.0.2", 00:23:57.299 "adrfam": "ipv4", 00:23:57.299 "trsvcid": "4420", 00:23:57.299 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:57.299 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:57.299 "hdgst": false, 00:23:57.299 "ddgst": false 00:23:57.299 }, 00:23:57.299 "method": "bdev_nvme_attach_controller" 00:23:57.299 }' 00:23:57.299 [2024-11-19 10:51:46.917960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.299 [2024-11-19 10:51:46.959098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.676 Running I/O for 1 seconds... 00:23:59.872 2263.00 IOPS, 141.44 MiB/s 00:23:59.872 Latency(us) 00:23:59.872 [2024-11-19T09:51:49.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.872 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.872 Verification LBA range: start 0x0 length 0x400 00:23:59.872 Nvme1n1 : 1.14 280.93 17.56 0.00 0.00 225624.06 18474.91 212711.13 00:23:59.872 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.872 Verification LBA range: start 0x0 length 0x400 00:23:59.872 Nvme2n1 : 1.14 281.83 17.61 0.00 0.00 221631.44 18350.08 205720.62 00:23:59.872 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.872 Verification LBA range: start 0x0 length 0x400 00:23:59.872 Nvme3n1 : 1.10 296.01 18.50 0.00 0.00 206958.85 9299.87 201726.05 00:23:59.872 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.872 Verification LBA range: start 0x0 length 0x400 00:23:59.872 Nvme4n1 : 1.12 288.68 18.04 0.00 0.00 206881.09 6459.98 211712.49 00:23:59.872 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:59.872 Verification LBA range: start 0x0 length 0x400 00:23:59.872 Nvme5n1 : 1.13 282.66 17.67 0.00 0.00 211748.67 15728.64 212711.13 00:23:59.872 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.872 Verification LBA range: start 0x0 length 0x400 00:23:59.872 Nvme6n1 : 1.09 234.86 14.68 0.00 0.00 250409.45 17476.27 221698.93 00:23:59.872 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.872 Verification LBA range: start 0x0 length 0x400 00:23:59.872 Nvme7n1 : 1.14 280.30 17.52 0.00 0.00 207522.38 13668.94 222697.57 00:23:59.872 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.872 Verification LBA range: start 0x0 length 0x400 00:23:59.872 Nvme8n1 : 1.15 279.16 17.45 0.00 0.00 205349.35 12982.37 230686.72 00:23:59.872 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.872 Verification LBA range: start 0x0 length 0x400 00:23:59.872 Nvme9n1 : 1.15 281.77 17.61 0.00 0.00 200418.87 1607.19 214708.42 00:23:59.872 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.872 Verification LBA range: start 0x0 length 0x400 00:23:59.872 Nvme10n1 : 1.15 277.50 17.34 0.00 0.00 200666.84 12732.71 230686.72 00:23:59.872 [2024-11-19T09:51:49.664Z] =================================================================================================================== 00:23:59.872 [2024-11-19T09:51:49.664Z] Total : 2783.71 173.98 0.00 0.00 212939.13 1607.19 230686.72 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:00.132 rmmod nvme_tcp 00:24:00.132 rmmod nvme_fabrics 00:24:00.132 rmmod nvme_keyring 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3986589 ']' 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3986589 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3986589 ']' 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 3986589 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3986589 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3986589' 00:24:00.132 killing process with pid 3986589 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3986589 00:24:00.132 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3986589 00:24:00.392 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:00.392 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:00.392 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:00.392 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:24:00.392 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:24:00.392 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:00.392 10:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:24:00.392 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:00.651 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:00.651 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.651 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.651 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:02.556 00:24:02.556 real 0m15.881s 00:24:02.556 user 0m36.408s 00:24:02.556 sys 0m5.838s 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:02.556 ************************************ 00:24:02.556 END TEST nvmf_shutdown_tc1 00:24:02.556 ************************************ 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:02.556 ************************************ 00:24:02.556 
START TEST nvmf_shutdown_tc2 00:24:02.556 ************************************ 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:02.556 10:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:24:02.556 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:02.557 10:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:02.557 10:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:02.557 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:02.557 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:02.557 10:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.557 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.817 10:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:02.817 Found net devices under 0000:86:00.0: cvl_0_0 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:02.817 Found net devices under 0000:86:00.1: cvl_0_1 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.817 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.818 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:02.818 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:02.818 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:02.818 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.818 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:02.818 10:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:02.818 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.818 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.818 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:02.818 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.818 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:02.818 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:02.818 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:02.818 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:02.818 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:02.818 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:02.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:02.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:24:02.818 00:24:02.818 --- 10.0.0.2 ping statistics --- 00:24:02.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.818 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:24:02.818 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:02.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:02.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:24:02.818 00:24:02.818 --- 10.0.0.1 ping statistics --- 00:24:02.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.818 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:24:03.077 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.077 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.078 10:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3988469 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3988469 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3988469 ']' 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.078 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.078 [2024-11-19 10:51:52.713104] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:24:03.078 [2024-11-19 10:51:52.713162] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.078 [2024-11-19 10:51:52.783011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:03.078 [2024-11-19 10:51:52.824975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.078 [2024-11-19 10:51:52.825015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.078 [2024-11-19 10:51:52.825022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.078 [2024-11-19 10:51:52.825028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.078 [2024-11-19 10:51:52.825034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:03.078 [2024-11-19 10:51:52.826469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.078 [2024-11-19 10:51:52.826577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:03.078 [2024-11-19 10:51:52.826685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.078 [2024-11-19 10:51:52.826686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.336 [2024-11-19 10:51:52.969995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.336 10:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.336 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:03.336 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.336 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:24:03.336 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.336 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:03.336 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.336 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:03.336 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.336 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:03.336 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.336 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:03.336 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.336 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:03.336 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:03.336 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.336 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.336 Malloc1 00:24:03.336 [2024-11-19 10:51:53.073370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.336 Malloc2 00:24:03.594 Malloc3 00:24:03.594 Malloc4 00:24:03.594 Malloc5 00:24:03.594 Malloc6 00:24:03.594 Malloc7 00:24:03.594 Malloc8 00:24:03.853 Malloc9 
00:24:03.853 Malloc10 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3988533 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3988533 /var/tmp/bdevperf.sock 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3988533 ']' 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:03.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.853 { 00:24:03.853 "params": { 00:24:03.853 "name": "Nvme$subsystem", 00:24:03.853 "trtype": "$TEST_TRANSPORT", 00:24:03.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.853 "adrfam": "ipv4", 00:24:03.853 "trsvcid": "$NVMF_PORT", 00:24:03.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.853 "hdgst": ${hdgst:-false}, 00:24:03.853 "ddgst": ${ddgst:-false} 00:24:03.853 }, 00:24:03.853 "method": "bdev_nvme_attach_controller" 00:24:03.853 } 00:24:03.853 EOF 00:24:03.853 )") 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.853 { 00:24:03.853 "params": { 00:24:03.853 "name": "Nvme$subsystem", 00:24:03.853 "trtype": "$TEST_TRANSPORT", 00:24:03.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.853 
"adrfam": "ipv4", 00:24:03.853 "trsvcid": "$NVMF_PORT", 00:24:03.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.853 "hdgst": ${hdgst:-false}, 00:24:03.853 "ddgst": ${ddgst:-false} 00:24:03.853 }, 00:24:03.853 "method": "bdev_nvme_attach_controller" 00:24:03.853 } 00:24:03.853 EOF 00:24:03.853 )") 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.853 { 00:24:03.853 "params": { 00:24:03.853 "name": "Nvme$subsystem", 00:24:03.853 "trtype": "$TEST_TRANSPORT", 00:24:03.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.853 "adrfam": "ipv4", 00:24:03.853 "trsvcid": "$NVMF_PORT", 00:24:03.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.853 "hdgst": ${hdgst:-false}, 00:24:03.853 "ddgst": ${ddgst:-false} 00:24:03.853 }, 00:24:03.853 "method": "bdev_nvme_attach_controller" 00:24:03.853 } 00:24:03.853 EOF 00:24:03.853 )") 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.853 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.853 { 00:24:03.853 "params": { 00:24:03.853 "name": "Nvme$subsystem", 00:24:03.853 "trtype": "$TEST_TRANSPORT", 00:24:03.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.853 "adrfam": "ipv4", 00:24:03.853 "trsvcid": "$NVMF_PORT", 00:24:03.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:24:03.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.853 "hdgst": ${hdgst:-false}, 00:24:03.853 "ddgst": ${ddgst:-false} 00:24:03.854 }, 00:24:03.854 "method": "bdev_nvme_attach_controller" 00:24:03.854 } 00:24:03.854 EOF 00:24:03.854 )") 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.854 { 00:24:03.854 "params": { 00:24:03.854 "name": "Nvme$subsystem", 00:24:03.854 "trtype": "$TEST_TRANSPORT", 00:24:03.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.854 "adrfam": "ipv4", 00:24:03.854 "trsvcid": "$NVMF_PORT", 00:24:03.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.854 "hdgst": ${hdgst:-false}, 00:24:03.854 "ddgst": ${ddgst:-false} 00:24:03.854 }, 00:24:03.854 "method": "bdev_nvme_attach_controller" 00:24:03.854 } 00:24:03.854 EOF 00:24:03.854 )") 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.854 { 00:24:03.854 "params": { 00:24:03.854 "name": "Nvme$subsystem", 00:24:03.854 "trtype": "$TEST_TRANSPORT", 00:24:03.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.854 "adrfam": "ipv4", 00:24:03.854 "trsvcid": "$NVMF_PORT", 00:24:03.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.854 "hdgst": ${hdgst:-false}, 00:24:03.854 "ddgst": 
${ddgst:-false} 00:24:03.854 }, 00:24:03.854 "method": "bdev_nvme_attach_controller" 00:24:03.854 } 00:24:03.854 EOF 00:24:03.854 )") 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.854 { 00:24:03.854 "params": { 00:24:03.854 "name": "Nvme$subsystem", 00:24:03.854 "trtype": "$TEST_TRANSPORT", 00:24:03.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.854 "adrfam": "ipv4", 00:24:03.854 "trsvcid": "$NVMF_PORT", 00:24:03.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.854 "hdgst": ${hdgst:-false}, 00:24:03.854 "ddgst": ${ddgst:-false} 00:24:03.854 }, 00:24:03.854 "method": "bdev_nvme_attach_controller" 00:24:03.854 } 00:24:03.854 EOF 00:24:03.854 )") 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:03.854 [2024-11-19 10:51:53.547529] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:24:03.854 [2024-11-19 10:51:53.547577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3988533 ] 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.854 { 00:24:03.854 "params": { 00:24:03.854 "name": "Nvme$subsystem", 00:24:03.854 "trtype": "$TEST_TRANSPORT", 00:24:03.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.854 "adrfam": "ipv4", 00:24:03.854 "trsvcid": "$NVMF_PORT", 00:24:03.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.854 "hdgst": ${hdgst:-false}, 00:24:03.854 "ddgst": ${ddgst:-false} 00:24:03.854 }, 00:24:03.854 "method": "bdev_nvme_attach_controller" 00:24:03.854 } 00:24:03.854 EOF 00:24:03.854 )") 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.854 { 00:24:03.854 "params": { 00:24:03.854 "name": "Nvme$subsystem", 00:24:03.854 "trtype": "$TEST_TRANSPORT", 00:24:03.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.854 "adrfam": "ipv4", 00:24:03.854 "trsvcid": "$NVMF_PORT", 00:24:03.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.854 "hdgst": ${hdgst:-false}, 00:24:03.854 "ddgst": ${ddgst:-false} 00:24:03.854 }, 00:24:03.854 "method": 
"bdev_nvme_attach_controller" 00:24:03.854 } 00:24:03.854 EOF 00:24:03.854 )") 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.854 { 00:24:03.854 "params": { 00:24:03.854 "name": "Nvme$subsystem", 00:24:03.854 "trtype": "$TEST_TRANSPORT", 00:24:03.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.854 "adrfam": "ipv4", 00:24:03.854 "trsvcid": "$NVMF_PORT", 00:24:03.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.854 "hdgst": ${hdgst:-false}, 00:24:03.854 "ddgst": ${ddgst:-false} 00:24:03.854 }, 00:24:03.854 "method": "bdev_nvme_attach_controller" 00:24:03.854 } 00:24:03.854 EOF 00:24:03.854 )") 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:24:03.854 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:03.854 "params": { 00:24:03.854 "name": "Nvme1", 00:24:03.854 "trtype": "tcp", 00:24:03.854 "traddr": "10.0.0.2", 00:24:03.854 "adrfam": "ipv4", 00:24:03.854 "trsvcid": "4420", 00:24:03.854 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.854 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.854 "hdgst": false, 00:24:03.854 "ddgst": false 00:24:03.854 }, 00:24:03.854 "method": "bdev_nvme_attach_controller" 00:24:03.854 },{ 00:24:03.854 "params": { 00:24:03.854 "name": "Nvme2", 00:24:03.854 "trtype": "tcp", 00:24:03.854 "traddr": "10.0.0.2", 00:24:03.854 "adrfam": "ipv4", 00:24:03.854 "trsvcid": "4420", 00:24:03.854 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:03.854 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:03.855 "hdgst": false, 00:24:03.855 "ddgst": false 00:24:03.855 }, 00:24:03.855 "method": "bdev_nvme_attach_controller" 00:24:03.855 },{ 00:24:03.855 "params": { 00:24:03.855 "name": "Nvme3", 00:24:03.855 "trtype": "tcp", 00:24:03.855 "traddr": "10.0.0.2", 00:24:03.855 "adrfam": "ipv4", 00:24:03.855 "trsvcid": "4420", 00:24:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:03.855 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:03.855 "hdgst": false, 00:24:03.855 "ddgst": false 00:24:03.855 }, 00:24:03.855 "method": "bdev_nvme_attach_controller" 00:24:03.855 },{ 00:24:03.855 "params": { 00:24:03.855 "name": "Nvme4", 00:24:03.855 "trtype": "tcp", 00:24:03.855 "traddr": "10.0.0.2", 00:24:03.855 "adrfam": "ipv4", 00:24:03.855 "trsvcid": "4420", 00:24:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:03.855 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:03.855 "hdgst": false, 00:24:03.855 "ddgst": false 00:24:03.855 }, 00:24:03.855 "method": "bdev_nvme_attach_controller" 00:24:03.855 },{ 00:24:03.855 "params": { 
00:24:03.855 "name": "Nvme5", 00:24:03.855 "trtype": "tcp", 00:24:03.855 "traddr": "10.0.0.2", 00:24:03.855 "adrfam": "ipv4", 00:24:03.855 "trsvcid": "4420", 00:24:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:03.855 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:03.855 "hdgst": false, 00:24:03.855 "ddgst": false 00:24:03.855 }, 00:24:03.855 "method": "bdev_nvme_attach_controller" 00:24:03.855 },{ 00:24:03.855 "params": { 00:24:03.855 "name": "Nvme6", 00:24:03.855 "trtype": "tcp", 00:24:03.855 "traddr": "10.0.0.2", 00:24:03.855 "adrfam": "ipv4", 00:24:03.855 "trsvcid": "4420", 00:24:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:03.855 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:03.855 "hdgst": false, 00:24:03.855 "ddgst": false 00:24:03.855 }, 00:24:03.855 "method": "bdev_nvme_attach_controller" 00:24:03.855 },{ 00:24:03.855 "params": { 00:24:03.855 "name": "Nvme7", 00:24:03.855 "trtype": "tcp", 00:24:03.855 "traddr": "10.0.0.2", 00:24:03.855 "adrfam": "ipv4", 00:24:03.855 "trsvcid": "4420", 00:24:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:03.855 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:03.855 "hdgst": false, 00:24:03.855 "ddgst": false 00:24:03.855 }, 00:24:03.855 "method": "bdev_nvme_attach_controller" 00:24:03.855 },{ 00:24:03.855 "params": { 00:24:03.855 "name": "Nvme8", 00:24:03.855 "trtype": "tcp", 00:24:03.855 "traddr": "10.0.0.2", 00:24:03.855 "adrfam": "ipv4", 00:24:03.855 "trsvcid": "4420", 00:24:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:03.855 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:03.855 "hdgst": false, 00:24:03.855 "ddgst": false 00:24:03.855 }, 00:24:03.855 "method": "bdev_nvme_attach_controller" 00:24:03.855 },{ 00:24:03.855 "params": { 00:24:03.855 "name": "Nvme9", 00:24:03.855 "trtype": "tcp", 00:24:03.855 "traddr": "10.0.0.2", 00:24:03.855 "adrfam": "ipv4", 00:24:03.855 "trsvcid": "4420", 00:24:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:03.855 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:24:03.855 "hdgst": false, 00:24:03.855 "ddgst": false 00:24:03.855 }, 00:24:03.855 "method": "bdev_nvme_attach_controller" 00:24:03.855 },{ 00:24:03.855 "params": { 00:24:03.855 "name": "Nvme10", 00:24:03.855 "trtype": "tcp", 00:24:03.855 "traddr": "10.0.0.2", 00:24:03.855 "adrfam": "ipv4", 00:24:03.855 "trsvcid": "4420", 00:24:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:03.855 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:03.855 "hdgst": false, 00:24:03.855 "ddgst": false 00:24:03.855 }, 00:24:03.855 "method": "bdev_nvme_attach_controller" 00:24:03.855 }' 00:24:03.855 [2024-11-19 10:51:53.623797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.113 [2024-11-19 10:51:53.665629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.013 Running I/O for 10 seconds... 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:06.013 10:51:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:24:06.013 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:06.272 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:06.272 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:06.272 10:51:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:06.272 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.272 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:06.272 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:06.272 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.272 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:24:06.272 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:24:06.272 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]]
00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131
00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0
00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break
00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0
00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3988533
00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3988533 ']'
00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3988533
00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:06.530 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3988533
00:24:06.789 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:06.789 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:06.789 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3988533'
killing process with pid 3988533
00:24:06.789 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3988533
00:24:06.789 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3988533
00:24:06.789 Received shutdown signal, test time was about 0.894522 seconds
00:24:06.789
00:24:06.789 Latency(us)
00:24:06.789 [2024-11-19T09:51:56.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:06.789 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:06.789 Verification LBA range: start 0x0 length 0x400
00:24:06.789 Nvme1n1 : 0.86 223.03 13.94 0.00 0.00 283800.79 18225.25 229688.08
00:24:06.789 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:06.789 Verification LBA range: start 0x0 length 0x400
00:24:06.789 Nvme2n1 : 0.88 317.00 19.81 0.00 0.00 193574.90 6491.18 200727.41
00:24:06.789 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:06.789 Verification LBA range: start 0x0 length 0x400
00:24:06.789 Nvme3n1 : 0.87 294.88 18.43 0.00 0.00 206839.71 24217.11 200727.41
00:24:06.789 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:06.789 Verification LBA range: start 0x0 length 0x400
00:24:06.789 Nvme4n1 : 0.87 293.84 18.36 0.00 0.00 203735.89 15478.98 193736.90
00:24:06.789 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:06.789 Verification LBA range: start 0x0 length 0x400
00:24:06.789 Nvme5n1 : 0.88 290.19 18.14 0.00 0.00 202477.96 14542.75 216705.71
00:24:06.789 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:06.789 Verification LBA range: start 0x0 length 0x400
00:24:06.789 Nvme6n1 : 0.88 292.43 18.28 0.00 0.00 197043.93 20846.69 213709.78
00:24:06.789 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:06.789 Verification LBA range: start 0x0 length 0x400
00:24:06.789 Nvme7n1 : 0.89 287.48 17.97 0.00 0.00 196460.62 13169.62 219701.64
00:24:06.789 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:06.789 Verification LBA range: start 0x0 length 0x400
00:24:06.789 Nvme8n1 : 0.89 287.76 17.99 0.00 0.00 192873.57 25215.76 203723.34
00:24:06.789 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:06.789 Verification LBA range: start 0x0 length 0x400
00:24:06.789 Nvme9n1 : 0.89 286.41 17.90 0.00 0.00 190074.15 16976.94 214708.42
00:24:06.789 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:06.789 Verification LBA range: start 0x0 length 0x400
00:24:06.789 Nvme10n1 : 0.86 223.58 13.97 0.00 0.00 236692.24 17725.93 235679.94
00:24:06.789 [2024-11-19T09:51:56.581Z] ===================================================================================================================
00:24:06.789 [2024-11-19T09:51:56.581Z] Total : 2796.60 174.79 0.00 0.00 207593.29 6491.18 235679.94
00:24:06.789 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3988469
00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 --
target/shutdown.sh@46 -- # nvmftestfini 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:08.165 rmmod nvme_tcp 00:24:08.165 rmmod nvme_fabrics 00:24:08.165 rmmod nvme_keyring 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3988469 ']' 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3988469 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3988469 ']' 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3988469 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3988469 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3988469' 00:24:08.165 killing process with pid 3988469 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3988469 00:24:08.165 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3988469 00:24:08.425 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:08.425 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:08.425 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:08.425 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:24:08.425 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:24:08.425 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:08.425 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:24:08.425 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:08.425 10:51:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:08.425 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:08.425 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:08.425 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:10.962 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:10.963
00:24:10.963 real 0m7.812s
00:24:10.963 user 0m23.814s
00:24:10.963 sys 0m1.384s
00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:10.963 ************************************
00:24:10.963 END TEST nvmf_shutdown_tc2
00:24:10.963 ************************************
00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3
00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:24:10.963 ************************************
00:24:10.963 START TEST nvmf_shutdown_tc3 ************************************
00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3
00:24:10.963 10:52:00
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.963 10:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:10.963 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:10.963 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:10.963 Found net devices under 0000:86:00.0: cvl_0_0 00:24:10.963 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.963 10:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:10.964 Found net devices under 0000:86:00.1: cvl_0_1 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:10.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:10.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms
00:24:10.964
00:24:10.964 --- 10.0.0.2 ping statistics ---
00:24:10.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:10.964 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:10.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:10.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms
00:24:10.964
00:24:10.964 --- 10.0.0.1 ping statistics ---
00:24:10.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:10.964 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:24:10.964
10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3989809 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3989809 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3989809 ']' 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.964 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:10.964 [2024-11-19 10:52:00.583742] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:24:10.964 [2024-11-19 10:52:00.583784] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.964 [2024-11-19 10:52:00.661730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:10.964 [2024-11-19 10:52:00.705772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.964 [2024-11-19 10:52:00.705804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.964 [2024-11-19 10:52:00.705812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.964 [2024-11-19 10:52:00.705819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.964 [2024-11-19 10:52:00.705824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:10.964 [2024-11-19 10:52:00.707468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.964 [2024-11-19 10:52:00.707574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.964 [2024-11-19 10:52:00.707703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.964 [2024-11-19 10:52:00.707704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:11.899 [2024-11-19 10:52:01.449403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.899 10:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.899 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:11.899 Malloc1 00:24:11.899 [2024-11-19 10:52:01.560870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.899 Malloc2 00:24:11.899 Malloc3 00:24:11.899 Malloc4 00:24:12.157 Malloc5 00:24:12.157 Malloc6 00:24:12.157 Malloc7 00:24:12.157 Malloc8 00:24:12.157 Malloc9 
00:24:12.157 Malloc10 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3990086 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3990086 /var/tmp/bdevperf.sock 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3990086 ']' 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:12.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:12.417 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:12.417 { 00:24:12.417 "params": { 00:24:12.417 "name": "Nvme$subsystem", 00:24:12.417 "trtype": "$TEST_TRANSPORT", 00:24:12.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.418 "adrfam": "ipv4", 00:24:12.418 "trsvcid": "$NVMF_PORT", 00:24:12.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.418 "hdgst": ${hdgst:-false}, 00:24:12.418 "ddgst": ${ddgst:-false} 00:24:12.418 }, 00:24:12.418 "method": "bdev_nvme_attach_controller" 00:24:12.418 } 00:24:12.418 EOF 00:24:12.418 )") 00:24:12.418 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:12.418 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:12.418 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:12.418 { 00:24:12.418 "params": { 00:24:12.418 "name": "Nvme$subsystem", 00:24:12.418 "trtype": "$TEST_TRANSPORT", 00:24:12.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.418 
"adrfam": "ipv4", 00:24:12.418 "trsvcid": "$NVMF_PORT", 00:24:12.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.418 "hdgst": ${hdgst:-false}, 00:24:12.418 "ddgst": ${ddgst:-false} 00:24:12.418 }, 00:24:12.418 "method": "bdev_nvme_attach_controller" 00:24:12.418 } 00:24:12.418 EOF 00:24:12.418 )") 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:12.418 { 00:24:12.418 "params": { 00:24:12.418 "name": "Nvme$subsystem", 00:24:12.418 "trtype": "$TEST_TRANSPORT", 00:24:12.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.418 "adrfam": "ipv4", 00:24:12.418 "trsvcid": "$NVMF_PORT", 00:24:12.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.418 "hdgst": ${hdgst:-false}, 00:24:12.418 "ddgst": ${ddgst:-false} 00:24:12.418 }, 00:24:12.418 "method": "bdev_nvme_attach_controller" 00:24:12.418 } 00:24:12.418 EOF 00:24:12.418 )") 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:12.418 { 00:24:12.418 "params": { 00:24:12.418 "name": "Nvme$subsystem", 00:24:12.418 "trtype": "$TEST_TRANSPORT", 00:24:12.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.418 "adrfam": "ipv4", 00:24:12.418 "trsvcid": "$NVMF_PORT", 00:24:12.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:24:12.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.418 "hdgst": ${hdgst:-false}, 00:24:12.418 "ddgst": ${ddgst:-false} 00:24:12.418 }, 00:24:12.418 "method": "bdev_nvme_attach_controller" 00:24:12.418 } 00:24:12.418 EOF 00:24:12.418 )") 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:12.418 { 00:24:12.418 "params": { 00:24:12.418 "name": "Nvme$subsystem", 00:24:12.418 "trtype": "$TEST_TRANSPORT", 00:24:12.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.418 "adrfam": "ipv4", 00:24:12.418 "trsvcid": "$NVMF_PORT", 00:24:12.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.418 "hdgst": ${hdgst:-false}, 00:24:12.418 "ddgst": ${ddgst:-false} 00:24:12.418 }, 00:24:12.418 "method": "bdev_nvme_attach_controller" 00:24:12.418 } 00:24:12.418 EOF 00:24:12.418 )") 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:12.418 { 00:24:12.418 "params": { 00:24:12.418 "name": "Nvme$subsystem", 00:24:12.418 "trtype": "$TEST_TRANSPORT", 00:24:12.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.418 "adrfam": "ipv4", 00:24:12.418 "trsvcid": "$NVMF_PORT", 00:24:12.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.418 "hdgst": ${hdgst:-false}, 00:24:12.418 "ddgst": 
${ddgst:-false} 00:24:12.418 }, 00:24:12.418 "method": "bdev_nvme_attach_controller" 00:24:12.418 } 00:24:12.418 EOF 00:24:12.418 )") 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:12.418 { 00:24:12.418 "params": { 00:24:12.418 "name": "Nvme$subsystem", 00:24:12.418 "trtype": "$TEST_TRANSPORT", 00:24:12.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.418 "adrfam": "ipv4", 00:24:12.418 "trsvcid": "$NVMF_PORT", 00:24:12.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.418 "hdgst": ${hdgst:-false}, 00:24:12.418 "ddgst": ${ddgst:-false} 00:24:12.418 }, 00:24:12.418 "method": "bdev_nvme_attach_controller" 00:24:12.418 } 00:24:12.418 EOF 00:24:12.418 )") 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:12.418 [2024-11-19 10:52:02.037986] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:24:12.418 [2024-11-19 10:52:02.038035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3990086 ] 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:12.418 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:12.418 { 00:24:12.418 "params": { 00:24:12.418 "name": "Nvme$subsystem", 00:24:12.418 "trtype": "$TEST_TRANSPORT", 00:24:12.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.419 "adrfam": "ipv4", 00:24:12.419 "trsvcid": "$NVMF_PORT", 00:24:12.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.419 "hdgst": ${hdgst:-false}, 00:24:12.419 "ddgst": ${ddgst:-false} 00:24:12.419 }, 00:24:12.419 "method": "bdev_nvme_attach_controller" 00:24:12.419 } 00:24:12.419 EOF 00:24:12.419 )") 00:24:12.419 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:12.419 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:12.419 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:12.419 { 00:24:12.419 "params": { 00:24:12.419 "name": "Nvme$subsystem", 00:24:12.419 "trtype": "$TEST_TRANSPORT", 00:24:12.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.419 "adrfam": "ipv4", 00:24:12.419 "trsvcid": "$NVMF_PORT", 00:24:12.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.419 "hdgst": ${hdgst:-false}, 00:24:12.419 "ddgst": ${ddgst:-false} 00:24:12.419 }, 00:24:12.419 "method": 
"bdev_nvme_attach_controller" 00:24:12.419 } 00:24:12.419 EOF 00:24:12.419 )") 00:24:12.419 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:12.419 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:12.419 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:12.419 { 00:24:12.419 "params": { 00:24:12.419 "name": "Nvme$subsystem", 00:24:12.419 "trtype": "$TEST_TRANSPORT", 00:24:12.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.419 "adrfam": "ipv4", 00:24:12.419 "trsvcid": "$NVMF_PORT", 00:24:12.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.419 "hdgst": ${hdgst:-false}, 00:24:12.419 "ddgst": ${ddgst:-false} 00:24:12.419 }, 00:24:12.419 "method": "bdev_nvme_attach_controller" 00:24:12.419 } 00:24:12.419 EOF 00:24:12.419 )") 00:24:12.419 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:12.419 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:24:12.419 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:24:12.419 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:12.419 "params": { 00:24:12.419 "name": "Nvme1", 00:24:12.419 "trtype": "tcp", 00:24:12.419 "traddr": "10.0.0.2", 00:24:12.419 "adrfam": "ipv4", 00:24:12.419 "trsvcid": "4420", 00:24:12.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:12.419 "hdgst": false, 00:24:12.419 "ddgst": false 00:24:12.419 }, 00:24:12.419 "method": "bdev_nvme_attach_controller" 00:24:12.419 },{ 00:24:12.419 "params": { 00:24:12.419 "name": "Nvme2", 00:24:12.419 "trtype": "tcp", 00:24:12.419 "traddr": "10.0.0.2", 00:24:12.419 "adrfam": "ipv4", 00:24:12.419 "trsvcid": "4420", 00:24:12.419 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:12.419 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:12.419 "hdgst": false, 00:24:12.419 "ddgst": false 00:24:12.419 }, 00:24:12.419 "method": "bdev_nvme_attach_controller" 00:24:12.419 },{ 00:24:12.419 "params": { 00:24:12.419 "name": "Nvme3", 00:24:12.419 "trtype": "tcp", 00:24:12.419 "traddr": "10.0.0.2", 00:24:12.419 "adrfam": "ipv4", 00:24:12.419 "trsvcid": "4420", 00:24:12.419 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:12.419 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:12.419 "hdgst": false, 00:24:12.419 "ddgst": false 00:24:12.419 }, 00:24:12.419 "method": "bdev_nvme_attach_controller" 00:24:12.419 },{ 00:24:12.419 "params": { 00:24:12.419 "name": "Nvme4", 00:24:12.419 "trtype": "tcp", 00:24:12.419 "traddr": "10.0.0.2", 00:24:12.419 "adrfam": "ipv4", 00:24:12.419 "trsvcid": "4420", 00:24:12.419 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:12.419 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:12.419 "hdgst": false, 00:24:12.419 "ddgst": false 00:24:12.419 }, 00:24:12.419 "method": "bdev_nvme_attach_controller" 00:24:12.419 },{ 00:24:12.419 "params": { 
00:24:12.419 "name": "Nvme5", 00:24:12.419 "trtype": "tcp", 00:24:12.419 "traddr": "10.0.0.2", 00:24:12.419 "adrfam": "ipv4", 00:24:12.419 "trsvcid": "4420", 00:24:12.419 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:12.419 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:12.419 "hdgst": false, 00:24:12.419 "ddgst": false 00:24:12.419 }, 00:24:12.419 "method": "bdev_nvme_attach_controller" 00:24:12.419 },{ 00:24:12.419 "params": { 00:24:12.419 "name": "Nvme6", 00:24:12.419 "trtype": "tcp", 00:24:12.419 "traddr": "10.0.0.2", 00:24:12.419 "adrfam": "ipv4", 00:24:12.419 "trsvcid": "4420", 00:24:12.419 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:12.419 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:12.419 "hdgst": false, 00:24:12.419 "ddgst": false 00:24:12.419 }, 00:24:12.419 "method": "bdev_nvme_attach_controller" 00:24:12.419 },{ 00:24:12.419 "params": { 00:24:12.419 "name": "Nvme7", 00:24:12.419 "trtype": "tcp", 00:24:12.419 "traddr": "10.0.0.2", 00:24:12.419 "adrfam": "ipv4", 00:24:12.419 "trsvcid": "4420", 00:24:12.419 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:12.419 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:12.419 "hdgst": false, 00:24:12.419 "ddgst": false 00:24:12.419 }, 00:24:12.419 "method": "bdev_nvme_attach_controller" 00:24:12.419 },{ 00:24:12.419 "params": { 00:24:12.419 "name": "Nvme8", 00:24:12.419 "trtype": "tcp", 00:24:12.419 "traddr": "10.0.0.2", 00:24:12.419 "adrfam": "ipv4", 00:24:12.419 "trsvcid": "4420", 00:24:12.419 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:12.419 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:12.419 "hdgst": false, 00:24:12.419 "ddgst": false 00:24:12.419 }, 00:24:12.419 "method": "bdev_nvme_attach_controller" 00:24:12.419 },{ 00:24:12.419 "params": { 00:24:12.419 "name": "Nvme9", 00:24:12.419 "trtype": "tcp", 00:24:12.419 "traddr": "10.0.0.2", 00:24:12.419 "adrfam": "ipv4", 00:24:12.419 "trsvcid": "4420", 00:24:12.419 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:12.419 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:24:12.419 "hdgst": false, 00:24:12.419 "ddgst": false 00:24:12.419 }, 00:24:12.419 "method": "bdev_nvme_attach_controller" 00:24:12.419 },{ 00:24:12.419 "params": { 00:24:12.419 "name": "Nvme10", 00:24:12.419 "trtype": "tcp", 00:24:12.419 "traddr": "10.0.0.2", 00:24:12.419 "adrfam": "ipv4", 00:24:12.419 "trsvcid": "4420", 00:24:12.420 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:12.420 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:12.420 "hdgst": false, 00:24:12.420 "ddgst": false 00:24:12.420 }, 00:24:12.420 "method": "bdev_nvme_attach_controller" 00:24:12.420 }' 00:24:12.420 [2024-11-19 10:52:02.117327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.420 [2024-11-19 10:52:02.158390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.320 Running I/O for 10 seconds... 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:24:14.320 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3989809 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3989809 ']' 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3989809 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:24:14.586 10:52:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3989809 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3989809' 00:24:14.586 killing process with pid 3989809 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3989809 00:24:14.586 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3989809 00:24:14.586 [2024-11-19 10:52:04.343367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) 
to be set 00:24:14.586 [2024-11-19 10:52:04.343491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 
[2024-11-19 10:52:04.343568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343645] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343720] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.586 [2024-11-19 10:52:04.343788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.587 [2024-11-19 10:52:04.343794] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.587 [2024-11-19 10:52:04.343800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.587 [2024-11-19 10:52:04.343806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.587 [2024-11-19 10:52:04.343812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.587 [2024-11-19 10:52:04.343818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.587 [2024-11-19 10:52:04.343824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.587 [2024-11-19 10:52:04.343830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.587 [2024-11-19 10:52:04.343836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.587 [2024-11-19 10:52:04.343843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70760 is same with the state(6) to be set 00:24:14.587 [2024-11-19 10:52:04.344224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.587 [2024-11-19 10:52:04.344255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.587 [2024-11-19 10:52:04.344272] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.587 [2024-11-19 10:52:04.344286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.587 [2024-11-19 10:52:04.344300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6b1b0 is same with the state(6) to be set 00:24:14.587 [2024-11-19 10:52:04.344371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.587 [2024-11-19 10:52:04.344509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.587 [2024-11-19 10:52:04.344814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.587 [2024-11-19 10:52:04.344820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.344828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.344834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 
[2024-11-19 10:52:04.344842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.344849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.344857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.344863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.344871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.344877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.344885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.344892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.344900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.344907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.344915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.344921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.344929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.344935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.344943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.344949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.344957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.344965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.344973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.344979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.344987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.344993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 
10:52:04.345172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8bf0 is same with the state(6) to be set 00:24:14.588 [2024-11-19 10:52:04.345223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8bf0 is same with the state(6) to be set 00:24:14.588 [2024-11-19 10:52:04.345233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8bf0 is same with the state(6) to be set 00:24:14.588 [2024-11-19 10:52:04.345240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8bf0 is same with the state(6) to be set 00:24:14.588 [2024-11-19 10:52:04.345250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8bf0 is same with the state(6) to be set 00:24:14.588 [2024-11-19 10:52:04.345257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8bf0 is same with the state(6) to be set 00:24:14.588 [2024-11-19 10:52:04.345266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8bf0 is same with the state(6) to be set 00:24:14.588 [2024-11-19 10:52:04.345276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8bf0 is same with the state(6) to be set 00:24:14.588 [2024-11-19 10:52:04.345286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8bf0 is same with the state(6) to be set 00:24:14.588 [2024-11-19 10:52:04.345289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8bf0 is same with the state(6) to be set 00:24:14.588 
[2024-11-19 10:52:04.345300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8bf0 is same with the state(6) to be set 00:24:14.588 [2024-11-19 10:52:04.345310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8bf0 is same with the state(6) to be set 00:24:14.588 [2024-11-19 10:52:04.345312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8bf0 is same with the state(6) to be set 00:24:14.588 [2024-11-19 10:52:04.345319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.588 [2024-11-19 10:52:04.345324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8bf0 is same with the state(6) to be set 00:24:14.588 [2024-11-19 10:52:04.345328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.588 [2024-11-19 10:52:04.345331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8bf0 is same with the state(6) to be set 00:24:14.589 [2024-11-19 10:52:04.345336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.589 [2024-11-19 10:52:04.345339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8bf0 is same with the state(6) to be set 00:24:14.589 [2024-11-19
10:52:04.345346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8bf0 is same with the state(6) to be set 
00:24:14.589 [2024-11-19 10:52:04.345347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.589 [2024-11-19 10:52:04.345354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.589 [2024-11-19 10:52:04.347032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f90c0 is same with the state(6) to be set 
00:24:14.589 [2024-11-19 10:52:04.347670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 
00:24:14.589 [2024-11-19 10:52:04.347702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6b1b0 (9): Bad file descriptor 
00:24:14.589 [2024-11-19 10:52:04.347706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f95b0 is same with the state(6) to be set 
00:24:14.589 [2024-11-19 10:52:04.347786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.589 [2024-11-19 10:52:04.347799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.589 [2024-11-19 10:52:04.347812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.589 [2024-11-19 10:52:04.347828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.589 [2024-11-19 10:52:04.347841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.589 [2024-11-19 10:52:04.347849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.589 [2024-11-19 10:52:04.347858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.589 [2024-11-19 10:52:04.347867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.590 [2024-11-19 10:52:04.347879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.590 [2024-11-19 10:52:04.347886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.590 [2024-11-19 10:52:04.347898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.590 [2024-11-19 10:52:04.347907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.590 [2024-11-19 10:52:04.347917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.590 [2024-11-19 10:52:04.347924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.590 [2024-11-19 10:52:04.347933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.590 [2024-11-19 10:52:04.347941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.590 [2024-11-19 10:52:04.347950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.590 [2024-11-19 10:52:04.347959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.590 [2024-11-19 10:52:04.347971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.590 [2024-11-19 10:52:04.347978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.590 [2024-11-19 10:52:04.347988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.590 [2024-11-19 10:52:04.347996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.590 [2024-11-19 10:52:04.348005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.590 [2024-11-19 10:52:04.348015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.590 [2024-11-19 10:52:04.348024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.590 [2024-11-19 10:52:04.348031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.590 [2024-11-19 10:52:04.348040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.590 [2024-11-19 10:52:04.348052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.590 [2024-11-19 10:52:04.348063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.590 [2024-11-19 10:52:04.348070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.590 [2024-11-19 10:52:04.348079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.590 [2024-11-19 10:52:04.348086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.590 [2024-11-19 10:52:04.348096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.590 [2024-11-19 10:52:04.348105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.590 [2024-11-19 10:52:04.348118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.590 [2024-11-19 10:52:04.348125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.590 [2024-11-19 10:52:04.348134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.590 [2024-11-19 10:52:04.348143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.590 [2024-11-19 10:52:04.348155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.591 [2024-11-19 10:52:04.348704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.591 [2024-11-19 10:52:04.348710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.592 [2024-11-19 10:52:04.348718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.592 [2024-11-19 10:52:04.348724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.592 [2024-11-19 10:52:04.348732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.592 [2024-11-19 10:52:04.348739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.592 [2024-11-19 10:52:04.348746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.592 [2024-11-19 10:52:04.348753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.592 [2024-11-19 10:52:04.348762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.592 [2024-11-19 10:52:04.348768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.592 [2024-11-19 10:52:04.348777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.592 [2024-11-19 10:52:04.348783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.592 [2024-11-19 10:52:04.348791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.592 [2024-11-19 10:52:04.348798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.592 [2024-11-19 10:52:04.348806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.592 [2024-11-19 10:52:04.348812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.592 [2024-11-19 10:52:04.348820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.592 [2024-11-19 10:52:04.348826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.592 [2024-11-19 10:52:04.348867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 
00:24:14.592 [2024-11-19 10:52:04.348903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.348909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.348916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.348925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.348932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.348938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.348944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.348951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.348957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.348963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.348969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.348975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 
10:52:04.348981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.348987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.348993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.348999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349060] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349133] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349210] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.349283] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9930 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.350956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.350973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.350981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set 00:24:14.592 [2024-11-19 10:52:04.350987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set 00:24:14.593 [2024-11-19 10:52:04.350994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set 00:24:14.593 [2024-11-19 10:52:04.351000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set 00:24:14.593 [2024-11-19 10:52:04.351007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set 00:24:14.593 [2024-11-19 10:52:04.351013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set 00:24:14.593 [2024-11-19 10:52:04.351019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set 00:24:14.593 [2024-11-19 10:52:04.351025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set 00:24:14.593 [2024-11-19 10:52:04.351031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set 00:24:14.593 [2024-11-19 10:52:04.351037] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.593 [2024-11-19 10:52:04.351060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.593 [2024-11-19 10:52:04.351066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.593 [2024-11-19 10:52:04.351088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.593 [2024-11-19 10:52:04.351095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.593 [2024-11-19 10:52:04.351111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.593 [2024-11-19 10:52:04.351118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.593 [2024-11-19 10:52:04.351125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.593 [2024-11-19 10:52:04.351132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.593 [2024-11-19 10:52:04.351147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.593 [2024-11-19 10:52:04.351154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.593 [2024-11-19 10:52:04.351161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.593 [2024-11-19 10:52:04.351179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.593 [2024-11-19 10:52:04.351186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.593 [2024-11-19 10:52:04.351193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.593 [2024-11-19 10:52:04.351212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.593 [2024-11-19 10:52:04.351219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.593 [2024-11-19 10:52:04.351235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.593 [2024-11-19 10:52:04.351243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.593 [2024-11-19 10:52:04.351250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.593 [2024-11-19 10:52:04.351257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.593 [2024-11-19 10:52:04.351264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.593 [2024-11-19 10:52:04.351273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.593 [2024-11-19 10:52:04.351288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.593 [2024-11-19 10:52:04.351295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.593 [2024-11-19 10:52:04.351309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.593 [2024-11-19 10:52:04.351316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.593 [2024-11-19 10:52:04.351327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.593 [2024-11-19 10:52:04.351334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.593 [2024-11-19 10:52:04.351341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.593 [2024-11-19 10:52:04.351349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.593 [2024-11-19 10:52:04.351360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.594 [2024-11-19 10:52:04.351365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.594 [2024-11-19 10:52:04.351367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.594 [2024-11-19 10:52:04.351372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.594 [2024-11-19 10:52:04.351377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.594 [2024-11-19 10:52:04.351379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.594 [2024-11-19 10:52:04.351385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.594 [2024-11-19 10:52:04.351386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.594 [2024-11-19 10:52:04.351394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.594 [2024-11-19 10:52:04.351395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.594 [2024-11-19 10:52:04.351400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.594 [2024-11-19 10:52:04.351403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:14.594 [2024-11-19 10:52:04.351408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.594 [2024-11-19 10:52:04.351412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170390 is same with the state(6) to be set
00:24:14.594 [2024-11-19 10:52:04.351415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.594 [2024-11-19 10:52:04.351422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa2d0 is same with the state(6) to be set
00:24:14.594 [2024-11-19 10:52:04.351657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:24:14.594 [2024-11-19 10:52:04.351703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d68c70 (9): Bad file descriptor
00:24:14.594 [2024-11-19 10:52:04.351879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:14.594 [2024-11-19 10:52:04.351892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6b1b0 with addr=10.0.0.2, port=4420
00:24:14.594 [2024-11-19 10:52:04.351901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6b1b0 is same with the state(6) to be set
00:24:14.594 [2024-11-19 10:52:04.352227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set
00:24:14.594 [2024-11-19 10:52:04.352243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 
10:52:04.352315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352387] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.352943] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:14.594 [2024-11-19 10:52:04.352966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:24:14.594 [2024-11-19 10:52:04.353001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a93e0 (9): Bad file descriptor 00:24:14.594 [2024-11-19 10:52:04.353019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6b1b0 (9): Bad file descriptor 00:24:14.594 [2024-11-19 10:52:04.353072] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:14.594 [2024-11-19 10:52:04.353625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.594 [2024-11-19 10:52:04.353645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d68c70 with addr=10.0.0.2, port=4420 00:24:14.594 [2024-11-19 10:52:04.353653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68c70 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.353670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:14.594 [2024-11-19 10:52:04.353677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:14.594 [2024-11-19 10:52:04.353685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:14.594 [2024-11-19 10:52:04.353694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:24:14.594 [2024-11-19 10:52:04.353755] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:14.594 [2024-11-19 10:52:04.354307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.594 [2024-11-19 10:52:04.354325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a93e0 with addr=10.0.0.2, port=4420 00:24:14.594 [2024-11-19 10:52:04.354333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a93e0 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.354342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d68c70 (9): Bad file descriptor 00:24:14.594 [2024-11-19 10:52:04.354376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.594 [2024-11-19 10:52:04.354385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.594 [2024-11-19 10:52:04.354393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.594 [2024-11-19 10:52:04.354399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.594 [2024-11-19 10:52:04.354406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.594 [2024-11-19 10:52:04.354413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.594 [2024-11-19 10:52:04.354420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.594 [2024-11-19 10:52:04.354427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.594 [2024-11-19 10:52:04.354433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b0e70 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.354463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.594 [2024-11-19 10:52:04.354471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.594 [2024-11-19 10:52:04.354479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.594 [2024-11-19 10:52:04.354485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.594 [2024-11-19 10:52:04.354492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.594 [2024-11-19 10:52:04.354498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.594 [2024-11-19 10:52:04.354505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.594 [2024-11-19 10:52:04.354511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.594 [2024-11-19 10:52:04.354517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7f610 is same with the state(6) to be set 00:24:14.594 [2024-11-19 10:52:04.354548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:14.594 [2024-11-19 10:52:04.354556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.594 [2024-11-19 10:52:04.354563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.595 [2024-11-19 10:52:04.354570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.595 [2024-11-19 10:52:04.354576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.595 [2024-11-19 10:52:04.354583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.595 [2024-11-19 10:52:04.354589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.595 [2024-11-19 10:52:04.354595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.595 [2024-11-19 10:52:04.354601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b0320 is same with the state(6) to be set 00:24:14.595 [2024-11-19 10:52:04.354624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.595 [2024-11-19 10:52:04.354632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.595 [2024-11-19 10:52:04.354640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.595 [2024-11-19 10:52:04.354646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.595 [2024-11-19 10:52:04.354653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.595 [2024-11-19 10:52:04.354659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.595 [2024-11-19 10:52:04.354666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.595 [2024-11-19 10:52:04.354673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.595 [2024-11-19 10:52:04.354681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c590 is same with the state(6) to be set 00:24:14.595 [2024-11-19 10:52:04.354705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.595 [2024-11-19 10:52:04.354713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.595 [2024-11-19 10:52:04.354720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.595 [2024-11-19 10:52:04.354727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.595 [2024-11-19 10:52:04.354733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.595 [2024-11-19 10:52:04.354740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:14.595 [2024-11-19 10:52:04.354746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.595 [2024-11-19 10:52:04.354752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.595 [2024-11-19 10:52:04.354759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21957a0 is same with the state(6) to be set 00:24:14.595 [2024-11-19 10:52:04.354782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.595 [2024-11-19 10:52:04.354791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.595 [2024-11-19 10:52:04.354797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.595 [2024-11-19 10:52:04.354803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.595 [2024-11-19 10:52:04.354810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.595 [2024-11-19 10:52:04.354817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.595 [2024-11-19 10:52:04.354823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.595 [2024-11-19 10:52:04.354830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.595 [2024-11-19 10:52:04.354835] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ad50 is same with the state(6) to be set 00:24:14.595 [2024-11-19 10:52:04.354916] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:14.595 [2024-11-19 10:52:04.355082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a93e0 (9): Bad file descriptor 00:24:14.595 [2024-11-19 10:52:04.355095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:24:14.595 [2024-11-19 10:52:04.355102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:24:14.595 [2024-11-19 10:52:04.355108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:24:14.595 [2024-11-19 10:52:04.355116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:24:14.595 [2024-11-19 10:52:04.355181] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:14.595 [2024-11-19 10:52:04.355287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:24:14.595 [2024-11-19 10:52:04.355300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:24:14.595 [2024-11-19 10:52:04.355307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:24:14.595 [2024-11-19 10:52:04.355313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:24:14.595 [2024-11-19 10:52:04.359771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:14.595 [2024-11-19 10:52:04.360004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.595 [2024-11-19 10:52:04.360021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6b1b0 with addr=10.0.0.2, port=4420 00:24:14.595 [2024-11-19 10:52:04.360029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6b1b0 is same with the state(6) to be set 00:24:14.595 [2024-11-19 10:52:04.360104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6b1b0 (9): Bad file descriptor 00:24:14.595 [2024-11-19 10:52:04.360599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:14.595 [2024-11-19 10:52:04.360608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:14.595 [2024-11-19 10:52:04.360615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:14.595 [2024-11-19 10:52:04.360623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:24:14.595 [2024-11-19 10:52:04.363115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:24:14.595 [2024-11-19 10:52:04.363328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.595 [2024-11-19 10:52:04.363343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d68c70 with addr=10.0.0.2, port=4420 00:24:14.595 [2024-11-19 10:52:04.363350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68c70 is same with the state(6) to be set 00:24:14.595 [2024-11-19 10:52:04.363384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d68c70 (9): Bad file descriptor 00:24:14.595 [2024-11-19 10:52:04.363415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:24:14.595 [2024-11-19 10:52:04.363422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:24:14.595 [2024-11-19 10:52:04.363430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:24:14.595 [2024-11-19 10:52:04.363437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:24:14.595 [2024-11-19 10:52:04.363759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:24:14.595 [2024-11-19 10:52:04.364015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.595 [2024-11-19 10:52:04.364027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a93e0 with addr=10.0.0.2, port=4420 00:24:14.595 [2024-11-19 10:52:04.364033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a93e0 is same with the state(6) to be set 00:24:14.595 [2024-11-19 10:52:04.364066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a93e0 (9): Bad file descriptor 00:24:14.595 [2024-11-19 10:52:04.364096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:24:14.595 [2024-11-19 10:52:04.364103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:24:14.595 [2024-11-19 10:52:04.364110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:24:14.595 [2024-11-19 10:52:04.364119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:24:14.595 [2024-11-19 10:52:04.364195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b0e70 (9): Bad file descriptor 00:24:14.595 [2024-11-19 10:52:04.364216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7f610 (9): Bad file descriptor 00:24:14.595 [2024-11-19 10:52:04.364238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b0320 (9): Bad file descriptor 00:24:14.596 [2024-11-19 10:52:04.364253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c590 (9): Bad file descriptor 00:24:14.596 [2024-11-19 10:52:04.364267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21957a0 (9): Bad file descriptor 00:24:14.596 [2024-11-19 10:52:04.364280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6ad50 (9): Bad file descriptor 00:24:14.596 [2024-11-19 10:52:04.365732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365774] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365843] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365913] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.365947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fa7c0 is same with the state(6) to be set 00:24:14.596 [2024-11-19 10:52:04.366019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.596 [2024-11-19 10:52:04.366066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:14.596 [2024-11-19 10:52:04.366320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.596 [2024-11-19 10:52:04.366328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366399] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366478] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 
10:52:04.366645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.597 [2024-11-19 10:52:04.366887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:14.597 [2024-11-19 10:52:04.366901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.597 [2024-11-19 10:52:04.366908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.598 [2024-11-19 10:52:04.366925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.598 [2024-11-19 10:52:04.366931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.598 [2024-11-19 10:52:04.366940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.598 [2024-11-19 10:52:04.366946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.598 [2024-11-19 10:52:04.366954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.598 [2024-11-19 10:52:04.366960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.598 [2024-11-19 10:52:04.366969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.598 [2024-11-19 10:52:04.366976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.598 [2024-11-19 10:52:04.366983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2172d80 is same with the state(6) to be set 00:24:14.598 [2024-11-19 10:52:04.367961] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:24:14.598 [2024-11-19 10:52:04.368229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:14.598 [2024-11-19 10:52:04.368244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b0320 with addr=10.0.0.2, port=4420
00:24:14.598 [2024-11-19 10:52:04.368252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b0320 is same with the state(6) to be set
00:24:14.598 [2024-11-19 10:52:04.368502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b0320 (9): Bad file descriptor
00:24:14.598 [2024-11-19 10:52:04.368535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:24:14.862 [2024-11-19 10:52:04.368544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:24:14.862 [2024-11-19 10:52:04.368555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:24:14.862 [2024-11-19 10:52:04.368562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:24:14.862 [2024-11-19 10:52:04.369884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:14.862 [2024-11-19 10:52:04.370155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:14.862 [2024-11-19 10:52:04.370167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6b1b0 with addr=10.0.0.2, port=4420
00:24:14.862 [2024-11-19 10:52:04.370174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6b1b0 is same with the state(6) to be set
00:24:14.862 [2024-11-19 10:52:04.370207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6b1b0 (9): Bad file descriptor
00:24:14.862 [2024-11-19 10:52:04.370234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:24:14.862 [2024-11-19 10:52:04.370241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:24:14.862 [2024-11-19 10:52:04.370248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:24:14.862 [2024-11-19 10:52:04.370254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:24:14.862 [2024-11-19 10:52:04.373213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:24:14.862 [2024-11-19 10:52:04.373392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:14.863 [2024-11-19 10:52:04.373404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d68c70 with addr=10.0.0.2, port=4420
00:24:14.863 [2024-11-19 10:52:04.373411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68c70 is same with the state(6) to be set
00:24:14.863 [2024-11-19 10:52:04.373440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d68c70 (9): Bad file descriptor
00:24:14.863 [2024-11-19 10:52:04.373468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:24:14.863 [2024-11-19 10:52:04.373475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:24:14.863 [2024-11-19 10:52:04.373481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:24:14.863 [2024-11-19 10:52:04.373487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:24:14.863 [2024-11-19 10:52:04.373848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:24:14.863 [2024-11-19 10:52:04.374023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:14.863 [2024-11-19 10:52:04.374034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a93e0 with addr=10.0.0.2, port=4420
00:24:14.863 [2024-11-19 10:52:04.374041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a93e0 is same with the state(6) to be set
00:24:14.863 [2024-11-19 10:52:04.374068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a93e0 (9): Bad file descriptor
00:24:14.863 [2024-11-19 10:52:04.374095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:24:14.863 [2024-11-19 10:52:04.374101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:24:14.863 [2024-11-19 10:52:04.374107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:24:14.863 [2024-11-19 10:52:04.374113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:24:14.863 [2024-11-19 10:52:04.374246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.863 [2024-11-19 10:52:04.374257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.863 [2024-11-19 10:52:04.374271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.863 [2024-11-19 10:52:04.374285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.863 [2024-11-19 10:52:04.374298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ae930 is same with the state(6) to be set 00:24:14.863 [2024-11-19 10:52:04.374399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374420] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 
10:52:04.374591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374672] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.863 [2024-11-19 10:52:04.374789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.863 [2024-11-19 10:52:04.374795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.374803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.374809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.374817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.374823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.374831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 
[2024-11-19 10:52:04.374838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.374846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.374852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.374860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.374867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.374875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.374881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.374888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.374896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.374904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.374911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.374919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.374925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.374933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.374940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.374947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.374954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.374962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.374968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.374977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.374983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.374991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.374998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375165] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375253] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.375344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.864 [2024-11-19 10:52:04.375351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f706a0 is same with the state(6) to be set 00:24:14.864 [2024-11-19 10:52:04.376331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.864 [2024-11-19 10:52:04.376343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 
10:52:04.376405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376490] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 
[2024-11-19 10:52:04.376660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.865 [2024-11-19 10:52:04.376918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.865 [2024-11-19 10:52:04.376924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.866 [2024-11-19 10:52:04.376933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.866 [2024-11-19 10:52:04.376939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.866 [2024-11-19 10:52:04.376947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.866 [2024-11-19 10:52:04.376954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.866 [2024-11-19 10:52:04.376962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.866 [2024-11-19 10:52:04.376969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.866 [2024-11-19 10:52:04.376977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.866 [2024-11-19 10:52:04.376984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.866 [2024-11-19 10:52:04.376993] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.866 [2024-11-19 10:52:04.377000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.866 [2024-11-19 10:52:04.377008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.866 [2024-11-19 10:52:04.377014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.866 [2024-11-19 10:52:04.377022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.866 [2024-11-19 10:52:04.377028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.866 [2024-11-19 10:52:04.377036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.866 [2024-11-19 10:52:04.377042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.866 [2024-11-19 10:52:04.377051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.866 [2024-11-19 10:52:04.377057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.866 [2024-11-19 10:52:04.377065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.866 [2024-11-19 10:52:04.377071] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.866 [2024-11-19 10:52:04.377079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.866 [2024-11-19 10:52:04.377086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.866 [2024-11-19 10:52:04.377094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.866 [2024-11-19 10:52:04.377101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.866 [2024-11-19 10:52:04.377109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.866 [2024-11-19 10:52:04.377115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.866 [2024-11-19 10:52:04.377124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.866 [2024-11-19 10:52:04.377130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.866 [2024-11-19 10:52:04.377138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.866 [2024-11-19 10:52:04.377145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.866 [2024-11-19 10:52:04.377153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.866 [2024-11-19 10:52:04.377159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [… repeated READ / ABORTED - SQ DELETION (00/08) notice pairs elided: cid 53-60 (lba 31360-32256) completing the block for tqpair 0x216da10, a full cid 0-63 block (lba 16384-24448) for tqpair 0x216eed0, and cid 0-27 (lba 16384-19840) of a further block; each tqpair block ends with nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=… is same with the state(6) to be set …] 00:24:14.869 [2024-11-19 10:52:04.380637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.869 [2024-11-19 10:52:04.380643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.869 [2024-11-19 10:52:04.380651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.869 [2024-11-19 10:52:04.380658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.869 [2024-11-19 10:52:04.380665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.869 [2024-11-19 10:52:04.380672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.869 [2024-11-19 10:52:04.380679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.869 [2024-11-19 10:52:04.380686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.869 [2024-11-19 10:52:04.380694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.869 [2024-11-19 10:52:04.380700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.869 [2024-11-19 10:52:04.380708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.869 [2024-11-19 10:52:04.380715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.869 [2024-11-19 10:52:04.380723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.869 [2024-11-19 10:52:04.380729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.869 [2024-11-19 10:52:04.380737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.869 [2024-11-19 10:52:04.380744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.380752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.380758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.380768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.380775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.380783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.380789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.380797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.380804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.380812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.380820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.380828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.380835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.380843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.380849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.380858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.380864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.380872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.380879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.380887] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.380893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.380902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.380908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.380916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.380923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.380930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.380937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.380945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.380952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.380960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.380966] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.380974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.380981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.380989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.380996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.381006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.381013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.381021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.381027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.381035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.381043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.381051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.381058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.381066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.381072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.381080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.381087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.381095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.381103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.381112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.381119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.381127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.381133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 
10:52:04.381141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.381148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.381155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.381162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.381169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2171850 is same with the state(6) to be set 00:24:14.870 [2024-11-19 10:52:04.382156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.382172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.870 [2024-11-19 10:52:04.382185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.870 [2024-11-19 10:52:04.382192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382220] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 
10:52:04.382394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382475] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 
[2024-11-19 10:52:04.382643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.871 [2024-11-19 10:52:04.382715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.871 [2024-11-19 10:52:04.382722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382973] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.382987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.382994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.383002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.383008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.383016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.383023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.383031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.383037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.383045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.383052] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.383060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.383067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.383075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.383082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.383090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.383096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.383104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.872 [2024-11-19 10:52:04.383113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.872 [2024-11-19 10:52:04.383121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff5f10 is same with the state(6) to be set 00:24:14.872 [2024-11-19 10:52:04.384070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:24:14.872 [2024-11-19 10:52:04.384087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:24:14.872 [2024-11-19 10:52:04.384095] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:24:14.872 [2024-11-19 10:52:04.384104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:24:14.872 task offset: 24576 on job bdev=Nvme1n1 fails 00:24:14.872 00:24:14.872 Latency(us) 00:24:14.872 [2024-11-19T09:52:04.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.872 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.872 Job: Nvme1n1 ended in about 0.74 seconds with error 00:24:14.872 Verification LBA range: start 0x0 length 0x400 00:24:14.873 Nvme1n1 : 0.74 260.51 16.28 86.84 0.00 181925.97 3167.57 217704.35 00:24:14.873 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.873 Job: Nvme2n1 ended in about 0.77 seconds with error 00:24:14.873 Verification LBA range: start 0x0 length 0x400 00:24:14.873 Nvme2n1 : 0.77 167.07 10.44 83.53 0.00 247132.16 16227.96 218702.99 00:24:14.873 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.873 Job: Nvme3n1 ended in about 0.74 seconds with error 00:24:14.873 Verification LBA range: start 0x0 length 0x400 00:24:14.873 Nvme3n1 : 0.74 259.19 16.20 86.40 0.00 175056.73 2793.08 201726.05 00:24:14.873 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.873 Job: Nvme4n1 ended in about 0.77 seconds with error 00:24:14.873 Verification LBA range: start 0x0 length 0x400 00:24:14.873 Nvme4n1 : 0.77 249.97 15.62 83.32 0.00 178005.94 15728.64 201726.05 00:24:14.873 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.873 Job: Nvme5n1 ended in about 0.77 seconds with error 00:24:14.873 Verification LBA range: start 0x0 length 0x400 00:24:14.873 Nvme5n1 : 0.77 166.23 10.39 83.11 0.00 232866.38 16976.94 212711.13 00:24:14.873 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 
65536) 00:24:14.873 Job: Nvme6n1 ended in about 0.74 seconds with error 00:24:14.873 Verification LBA range: start 0x0 length 0x400 00:24:14.873 Nvme6n1 : 0.74 258.50 16.16 24.23 0.00 199241.92 16227.96 213709.78 00:24:14.873 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.873 Job: Nvme7n1 ended in about 0.77 seconds with error 00:24:14.873 Verification LBA range: start 0x0 length 0x400 00:24:14.873 Nvme7n1 : 0.77 165.81 10.36 82.91 0.00 223126.67 27337.87 206719.27 00:24:14.873 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.873 Job: Nvme8n1 ended in about 0.76 seconds with error 00:24:14.873 Verification LBA range: start 0x0 length 0x400 00:24:14.873 Nvme8n1 : 0.76 168.89 10.56 84.44 0.00 213175.18 15042.07 210713.84 00:24:14.873 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.873 Verification LBA range: start 0x0 length 0x400 00:24:14.873 Nvme9n1 : 0.75 255.56 15.97 0.00 0.00 205806.45 22344.66 239674.51 00:24:14.873 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.873 Job: Nvme10n1 ended in about 0.77 seconds with error 00:24:14.873 Verification LBA range: start 0x0 length 0x400 00:24:14.873 Nvme10n1 : 0.77 165.39 10.34 82.70 0.00 208342.71 32455.92 219701.64 00:24:14.873 [2024-11-19T09:52:04.665Z] =================================================================================================================== 00:24:14.873 [2024-11-19T09:52:04.665Z] Total : 2117.12 132.32 697.49 0.00 203870.52 2793.08 239674.51 00:24:14.873 [2024-11-19 10:52:04.413822] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:14.873 [2024-11-19 10:52:04.413873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:24:14.873 [2024-11-19 10:52:04.414188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.873 [2024-11-19 10:52:04.414210] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6ad50 with addr=10.0.0.2, port=4420 00:24:14.873 [2024-11-19 10:52:04.414222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ad50 is same with the state(6) to be set 00:24:14.873 [2024-11-19 10:52:04.414422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.873 [2024-11-19 10:52:04.414433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21957a0 with addr=10.0.0.2, port=4420 00:24:14.873 [2024-11-19 10:52:04.414441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21957a0 is same with the state(6) to be set 00:24:14.873 [2024-11-19 10:52:04.414659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.873 [2024-11-19 10:52:04.414669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c590 with addr=10.0.0.2, port=4420 00:24:14.873 [2024-11-19 10:52:04.414676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c590 is same with the state(6) to be set 00:24:14.873 [2024-11-19 10:52:04.414820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.873 [2024-11-19 10:52:04.414831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c7f610 with addr=10.0.0.2, port=4420 00:24:14.873 [2024-11-19 10:52:04.414838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7f610 is same with the state(6) to be set 00:24:14.873 [2024-11-19 10:52:04.415994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:24:14.873 [2024-11-19 10:52:04.416010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:14.873 [2024-11-19 10:52:04.416018] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:24:14.873 [2024-11-19 10:52:04.416028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:24:14.873 [2024-11-19 10:52:04.416344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.873 [2024-11-19 10:52:04.416359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b0e70 with addr=10.0.0.2, port=4420 00:24:14.873 [2024-11-19 10:52:04.416367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b0e70 is same with the state(6) to be set 00:24:14.873 [2024-11-19 10:52:04.416381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6ad50 (9): Bad file descriptor 00:24:14.873 [2024-11-19 10:52:04.416393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21957a0 (9): Bad file descriptor 00:24:14.873 [2024-11-19 10:52:04.416402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c590 (9): Bad file descriptor 00:24:14.873 [2024-11-19 10:52:04.416411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7f610 (9): Bad file descriptor 00:24:14.873 [2024-11-19 10:52:04.416425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ae930 (9): Bad file descriptor 00:24:14.873 [2024-11-19 10:52:04.416463] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:24:14.873 [2024-11-19 10:52:04.416473] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:24:14.873 [2024-11-19 10:52:04.416483] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:24:14.873 [2024-11-19 10:52:04.416496] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:24:14.873 [2024-11-19 10:52:04.416787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.873 [2024-11-19 10:52:04.416800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b0320 with addr=10.0.0.2, port=4420 00:24:14.873 [2024-11-19 10:52:04.416808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b0320 is same with the state(6) to be set 00:24:14.873 [2024-11-19 10:52:04.417023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.873 [2024-11-19 10:52:04.417034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6b1b0 with addr=10.0.0.2, port=4420 00:24:14.873 [2024-11-19 10:52:04.417041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6b1b0 is same with the state(6) to be set 00:24:14.873 [2024-11-19 10:52:04.417186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.873 [2024-11-19 10:52:04.417197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d68c70 with addr=10.0.0.2, port=4420 00:24:14.873 [2024-11-19 10:52:04.417207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68c70 is same with the state(6) to be set 00:24:14.873 [2024-11-19 10:52:04.417350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.873 [2024-11-19 10:52:04.417361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a93e0 with addr=10.0.0.2, port=4420 
00:24:14.873 [2024-11-19 10:52:04.417369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a93e0 is same with the state(6) to be set 00:24:14.874 [2024-11-19 10:52:04.417378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b0e70 (9): Bad file descriptor 00:24:14.874 [2024-11-19 10:52:04.417386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:24:14.874 [2024-11-19 10:52:04.417393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:24:14.874 [2024-11-19 10:52:04.417400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:24:14.874 [2024-11-19 10:52:04.417408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:24:14.874 [2024-11-19 10:52:04.417416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:24:14.874 [2024-11-19 10:52:04.417422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:24:14.874 [2024-11-19 10:52:04.417429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:24:14.874 [2024-11-19 10:52:04.417434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:24:14.874 [2024-11-19 10:52:04.417441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:24:14.874 [2024-11-19 10:52:04.417447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:24:14.874 [2024-11-19 10:52:04.417453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:24:14.874 [2024-11-19 10:52:04.417459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:24:14.874 [2024-11-19 10:52:04.417466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:24:14.874 [2024-11-19 10:52:04.417471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:24:14.874 [2024-11-19 10:52:04.417478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:24:14.874 [2024-11-19 10:52:04.417487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:24:14.874 [2024-11-19 10:52:04.417554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b0320 (9): Bad file descriptor 00:24:14.874 [2024-11-19 10:52:04.417564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6b1b0 (9): Bad file descriptor 00:24:14.874 [2024-11-19 10:52:04.417573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d68c70 (9): Bad file descriptor 00:24:14.874 [2024-11-19 10:52:04.417581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a93e0 (9): Bad file descriptor 00:24:14.874 [2024-11-19 10:52:04.417588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:24:14.874 [2024-11-19 10:52:04.417594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:24:14.874 [2024-11-19 10:52:04.417601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:24:14.874 [2024-11-19 10:52:04.417607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:24:14.874 [2024-11-19 10:52:04.417631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:24:14.874 [2024-11-19 10:52:04.417638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:24:14.874 [2024-11-19 10:52:04.417644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:24:14.874 [2024-11-19 10:52:04.417650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:24:14.874 [2024-11-19 10:52:04.417657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:14.874 [2024-11-19 10:52:04.417662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:14.874 [2024-11-19 10:52:04.417668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:14.874 [2024-11-19 10:52:04.417674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:14.874 [2024-11-19 10:52:04.417680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:24:14.874 [2024-11-19 10:52:04.417686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:24:14.874 [2024-11-19 10:52:04.417692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:24:14.874 [2024-11-19 10:52:04.417698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:24:14.874 [2024-11-19 10:52:04.417705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:24:14.874 [2024-11-19 10:52:04.417710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:24:14.874 [2024-11-19 10:52:04.417717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:24:14.874 [2024-11-19 10:52:04.417723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:24:15.133 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3990086 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3990086 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3990086 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:16.072 rmmod nvme_tcp 00:24:16.072 rmmod nvme_fabrics 00:24:16.072 rmmod nvme_keyring 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:24:16.072 10:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3989809 ']' 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3989809 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3989809 ']' 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3989809 00:24:16.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3989809) - No such process 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3989809 is not found' 00:24:16.072 Process with pid 3989809 is not found 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.072 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.608 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:18.608 00:24:18.608 real 0m7.713s 00:24:18.608 user 0m18.643s 00:24:18.608 sys 0m1.345s 00:24:18.608 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:18.608 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:18.608 ************************************ 00:24:18.608 END TEST nvmf_shutdown_tc3 00:24:18.608 ************************************ 00:24:18.608 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:24:18.608 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:24:18.608 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:24:18.608 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:18.608 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:18.608 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:18.608 ************************************ 00:24:18.608 START TEST nvmf_shutdown_tc4 00:24:18.608 ************************************ 00:24:18.608 10:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:18.608 10:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:18.608 10:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:18.608 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:18.609 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:18.609 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:18.609 10:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:24:18.609 Found net devices under 0000:86:00.0: cvl_0_0 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:18.609 Found net devices under 0000:86:00.1: cvl_0_1 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:18.609 10:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:18.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:18.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:24:18.609 00:24:18.609 --- 10.0.0.2 ping statistics --- 00:24:18.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.609 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:18.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:18.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:24:18.609 00:24:18.609 --- 10.0.0.1 ping statistics --- 00:24:18.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.609 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:18.609 10:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3991347 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3991347 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3991347 ']' 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.609 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.610 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
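`waitforlisten`, traced above with pid 3991347, `rpc_addr=/var/tmp/spdk.sock`, and `max_retries=100`, polls until the target process is alive and its RPC UNIX socket exists. A minimal sketch of that pattern (not the real autotest_common.sh implementation; the 0.1 s interval and the optional retries parameter are assumptions):

```shell
# Sketch: block until $pid is up and listening on its RPC UNIX socket.
# Returns 0 on success, 1 if the process dies or retries are exhausted.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=${3:-100} i
    for ((i = 0; i < retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process exited early
        [ -S "$rpc_addr" ] && return 0           # RPC socket is up
        sleep 0.1                                # assumed poll interval
    done
    return 1
}
```

In the trace the quadruple `ip netns exec cvl_0_0_ns_spdk` prefix on the nvmf_tgt command line is the expanded `NVMF_TARGET_NS_CMD` array, so the target runs inside the namespace created earlier.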
00:24:18.610 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.610 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:18.610 [2024-11-19 10:52:08.371174] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:24:18.610 [2024-11-19 10:52:08.371222] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.870 [2024-11-19 10:52:08.448985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:18.870 [2024-11-19 10:52:08.490336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.870 [2024-11-19 10:52:08.490374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.870 [2024-11-19 10:52:08.490382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.870 [2024-11-19 10:52:08.490388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.870 [2024-11-19 10:52:08.490392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:18.870 [2024-11-19 10:52:08.491990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.870 [2024-11-19 10:52:08.492079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:18.870 [2024-11-19 10:52:08.492112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.870 [2024-11-19 10:52:08.492112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:18.870 [2024-11-19 10:52:08.623392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.870 10:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:18.870 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
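The `for i in "${num_subsystems[@]}" / cat` loop above appends one RPC batch per subsystem (1 through 10) into rpcs.txt for later replay through rpc.py. The exact RPC lines are not visible in this trace; the generator below is a sketch using assumed bdev sizes and the conventional `nqn.2016-06.io.spdk` NQN prefix (the Malloc1..Malloc10 bdev names do appear later in the log):

```shell
# Sketch of a per-subsystem RPC batch generator. Sizes (64 MiB, 512 B blocks)
# and serial numbers are illustrative assumptions.
gen_subsystem_rpcs() {
    local n=$1 i
    for ((i = 1; i <= n; i++)); do
        printf 'bdev_malloc_create -b Malloc%d 64 512\n' "$i"
        printf 'nvmf_create_subsystem nqn.2016-06.io.spdk:cnode%d -a -s SPDK%d\n' "$i" "$i"
        printf 'nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode%d Malloc%d\n' "$i" "$i"
        printf 'nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode%d -t tcp -a 10.0.0.2 -s 4420\n' "$i"
    done
}

# Usage sketch: gen_subsystem_rpcs 10 > rpcs.txt, then replay the batch.
```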
00:24:19.129 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:19.129 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:19.129 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:19.129 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:19.129 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:19.129 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:19.129 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:19.129 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:19.129 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:19.129 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:19.129 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:19.129 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.129 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:19.129 Malloc1 00:24:19.129 [2024-11-19 10:52:08.732755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.129 Malloc2 00:24:19.129 Malloc3 00:24:19.129 Malloc4 00:24:19.129 Malloc5 00:24:19.387 Malloc6 00:24:19.387 Malloc7 00:24:19.387 Malloc8 00:24:19.387 Malloc9 
00:24:19.387 Malloc10 00:24:19.387 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.387 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:19.387 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:19.387 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:19.387 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3991399 00:24:19.387 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:24:19.387 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:24:19.644 [2024-11-19 10:52:09.245190] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
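`spdk_nvme_perf` above addresses the target through a transport ID string rather than a device path. The format is easy to mistype; a small helper that reproduces exactly the string traced in the log (`trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420`):

```shell
# Build the -r transport ID string for SPDK NVMe CLI tools, matching the
# format seen in this log. Arguments: transport type, target address, service ID.
build_trid() {
    printf 'trtype:%s adrfam:IPV4 traddr:%s trsvcid:%s' "$1" "$2" "$3"
}

# Usage sketch (binary path and workload flags as traced in the log):
#   spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
#       -r "$(build_trid tcp 10.0.0.2 4420)" -P 4
```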
00:24:24.919 10:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:24.919 10:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3991347 00:24:24.919 10:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3991347 ']' 00:24:24.919 10:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3991347 00:24:24.919 10:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:24:24.919 10:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.919 10:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3991347 00:24:24.919 10:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:24.919 10:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:24.919 10:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3991347' 00:24:24.919 killing process with pid 3991347 00:24:24.919 10:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3991347 00:24:24.919 10:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3991347 00:24:24.919 [2024-11-19 10:52:14.233648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1fe0 is same with the state(6) to be set 00:24:24.919 [2024-11-19 
10:52:14.233707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1fe0 is same with the state(6) to be set 00:24:24.919 [2024-11-19 10:52:14.233716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1fe0 is same with the state(6) to be set 00:24:24.919 [2024-11-19 10:52:14.233722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1fe0 is same with the state(6) to be set 00:24:24.919 [2024-11-19 10:52:14.233729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1fe0 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.233735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1fe0 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.233740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1fe0 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.233746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1fe0 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.233752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1fe0 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.233758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1fe0 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.234498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb24b0 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.234527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb24b0 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.234535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb24b0 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.234542] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb24b0 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.234549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb24b0 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.234555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb24b0 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.234561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb24b0 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.234567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb24b0 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.234573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb24b0 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.235188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb2980 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.235220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb2980 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.235228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb2980 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.235241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb2980 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.235248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb2980 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.237062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1a680 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.237084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xb1a680 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.237091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1a680 is same with the state(6) to be set 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with 
error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 [2024-11-19 10:52:14.238648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5c00 is same with the state(6) to be set 00:24:24.920 starting I/O failed: -6 00:24:24.920 [2024-11-19 10:52:14.238668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5c00 is same with the state(6) to be set 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 [2024-11-19 10:52:14.238676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5c00 is same with the state(6) to be set 00:24:24.920 starting I/O failed: -6 00:24:24.920 [2024-11-19 10:52:14.238683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5c00 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.238689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5c00 is same with the state(6) to be set 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 [2024-11-19 10:52:14.238695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5c00 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.238702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5c00 is
same with the state(6) to be set 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 [2024-11-19 10:52:14.238708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5c00 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.238715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5c00 is same with the state(6) to be set 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 [2024-11-19 10:52:14.238736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5c00 is same with the state(6) to be set 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 [2024-11-19 10:52:14.239018] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0590 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.239039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0590 is same with the state(6) to be set 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 [2024-11-19 10:52:14.239045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0590 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.239052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0590 is same with the state(6) to be set 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 [2024-11-19 10:52:14.239058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0590 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.239065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0590 is same with the state(6) to be set 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 [2024-11-19 10:52:14.239071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0590 is same with the state(6) to be set 00:24:24.920 starting I/O failed: -6 00:24:24.920 [2024-11-19 10:52:14.239077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0590 is same with the state(6) to be set 00:24:24.920 [2024-11-19 10:52:14.239083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0590 is same with the state(6) to be set 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 [2024-11-19 10:52:14.239089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0590 is same with the state(6) to be set 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 
Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.920 Write completed with error (sct=0, sc=8) 00:24:24.920 starting I/O failed: -6 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 [2024-11-19 10:52:14.239352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.921 [2024-11-19 10:52:14.239385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0910 is same with the state(6) to be set 00:24:24.921 NVMe io qpair process completion error 00:24:24.921 [2024-11-19 10:52:14.239406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0910 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0910 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0910 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0910 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239435] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0910 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0910 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0910 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0910 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0910 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0910 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0910 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0910 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0910 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0910 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0910 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5730 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xab5730 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5730 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.239996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5730 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.240002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5730 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.240008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5730 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.240014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5730 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.240020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5730 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.240659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb192e0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.240676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb192e0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.240682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb192e0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.240689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb192e0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.240695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb192e0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.240701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb192e0 
is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.240711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb192e0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.240717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb192e0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.240722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb192e0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.240728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb192e0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.241250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb197d0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.241269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb197d0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.241275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb197d0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.241281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb197d0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.241288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb197d0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.241293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb197d0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.241299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb197d0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.241305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb197d0 is same with the state(6) to be set 
00:24:24.921 [2024-11-19 10:52:14.241311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb197d0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.241664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb19cc0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.241682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb19cc0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.241688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb19cc0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.241695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb19cc0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.241701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb19cc0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.242146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0de0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.242165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0de0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.242172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0de0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.242179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0de0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.242185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0de0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.242196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0de0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.242212] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0de0 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.242219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0de0 is same with the state(6) to be set 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 starting I/O failed: -6 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 starting I/O failed: -6 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 starting I/O failed: -6 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 starting I/O failed: -6 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 starting I/O failed: -6 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 starting I/O failed: -6 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 [2024-11-19 
10:52:14.248483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1cd80 is same with the state(6) to be set 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 starting I/O failed: -6 00:24:24.921 [2024-11-19 10:52:14.248505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1cd80 is same with the state(6) to be set 00:24:24.921 [2024-11-19 10:52:14.248512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1cd80 is same with the state(6) to be set 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 starting I/O failed: -6 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 starting I/O failed: -6 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 starting I/O failed: -6 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 Write completed with error (sct=0, sc=8) 00:24:24.921 [2024-11-19 10:52:14.248740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:24.922 [2024-11-19 10:52:14.248756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d250 is same with the state(6) to be set 00:24:24.922 [2024-11-19 10:52:14.248775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d250 is same with the state(6) to be set 00:24:24.922 [2024-11-19 10:52:14.248782] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d250 is same with the state(6) to be set 00:24:24.922 [2024-11-19 10:52:14.248789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d250 is same with the state(6) to be set 00:24:24.922 [2024-11-19 10:52:14.248795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d250 is same with the state(6) to be set 00:24:24.922 [2024-11-19 10:52:14.248801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d250 is same with the state(6) to be set 00:24:24.922 [2024-11-19 10:52:14.248807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d250 is same with the state(6) to be set 00:24:24.922 [2024-11-19 10:52:14.248813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d250 is same with the state(6) to be set 00:24:24.922 [2024-11-19 10:52:14.248818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d250 is same with the state(6) to be set 00:24:24.922 [2024-11-19 10:52:14.248824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d250 is same with the state(6) to be set 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 
starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 [2024-11-19 10:52:14.249208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d740 is same with the state(6) to be set 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 [2024-11-19 10:52:14.249227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d740 is same with the state(6) to be set 00:24:24.922 starting I/O failed: -6 00:24:24.922 [2024-11-19 10:52:14.249235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d740 is same with the state(6) to be set 00:24:24.922 [2024-11-19 10:52:14.249242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d740 is same with the state(6) to be set 00:24:24.922 [2024-11-19 10:52:14.249248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d740 is same with the state(6) to be set 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 [2024-11-19 10:52:14.249254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1d740 is same with the state(6) to be set 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with 
error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 [2024-11-19 10:52:14.249553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:24.922 [2024-11-19 10:52:14.249610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1c8b0 is same with the state(6) to be set 00:24:24.922 [2024-11-19 10:52:14.249630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1c8b0 is same with the state(6) to be set 00:24:24.922 [2024-11-19 10:52:14.249638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1c8b0 is same with the state(6) to be set 00:24:24.922 [2024-11-19 10:52:14.249644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1c8b0 is same with the state(6) to be set 00:24:24.922 [2024-11-19 10:52:14.249650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1c8b0 is same with the state(6) to be set 00:24:24.922 Write completed with 
error (sct=0, sc=8) 00:24:24.922 [2024-11-19 10:52:14.249656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1c8b0 is same with the state(6) to be set 00:24:24.922 starting I/O failed: -6 00:24:24.922 [2024-11-19 10:52:14.249662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1c8b0 is same with the state(6) to be set 00:24:24.922 [2024-11-19 10:52:14.249673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1c8b0 is same with the state(6) to be set 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 
00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 
00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 [2024-11-19 10:52:14.250572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.922 Write completed with error (sct=0, sc=8) 00:24:24.922 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, 
sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error 
(sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with 
error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 [2024-11-19 10:52:14.252030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e0e0 is same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e0e0 is same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e0e0 is same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e0e0 is same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e0e0 is same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e0e0 is same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e0e0 is same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e0e0 is same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e0e0 is same with the state(6) to be set 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 [2024-11-19 10:52:14.252104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e0e0 is same with the state(6) to be set 00:24:24.923 starting I/O failed: -6 00:24:24.923 [2024-11-19 10:52:14.252111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e0e0 is 
same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e0e0 is same with the state(6) to be set 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 [2024-11-19 10:52:14.252281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.923 NVMe io qpair process completion error 00:24:24.923 [2024-11-19 10:52:14.252414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaff60 is same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaff60 is same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaff60 is same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaff60 is same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaff60 is same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaff60 is same with the state(6) to be set 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 Write completed 
with error (sct=0, sc=8) 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 starting I/O failed: -6 00:24:24.923 [2024-11-19 10:52:14.252876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb0450 is same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb0450 is same with the state(6) to be set 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 [2024-11-19 10:52:14.252894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb0450 is same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb0450 is same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb0450 is same with the state(6) to be set 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 [2024-11-19 10:52:14.252914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb0450 is same with the state(6) to be set 00:24:24.923 [2024-11-19 10:52:14.252920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb0450 is same with the state(6) to be set 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 
[2024-11-19 10:52:14.252927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb0450 is same with the state(6) to be set 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 [2024-11-19 10:52:14.252936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb0450 is same with the state(6) to be set 00:24:24.923 starting I/O failed: -6 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.923 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 [2024-11-19 10:52:14.253175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:24.924 [2024-11-19 10:52:14.253229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1dc10 is same with the state(6) to be set 00:24:24.924 [2024-11-19 10:52:14.253246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1dc10 is same with the state(6) to be set 00:24:24.924 [2024-11-19 10:52:14.253252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1dc10 is same with the state(6) to be set 00:24:24.924 [2024-11-19 
10:52:14.253258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1dc10 is same with the state(6) to be set 00:24:24.924 [2024-11-19 10:52:14.253264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1dc10 is same with the state(6) to be set 00:24:24.924 [2024-11-19 10:52:14.253271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1dc10 is same with the state(6) to be set 00:24:24.924 [2024-11-19 10:52:14.253276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1dc10 is same with the state(6) to be set 00:24:24.924 [2024-11-19 10:52:14.253282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1dc10 is same with the state(6) to be set 00:24:24.924 [2024-11-19 10:52:14.253288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1dc10 is same with the state(6) to be set 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed 
with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, 
sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 [2024-11-19 10:52:14.254080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 
00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.924 starting I/O failed: -6 00:24:24.924 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with 
error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 [2024-11-19 10:52:14.255112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O 
failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting 
I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 
starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 [2024-11-19 10:52:14.256850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.925 NVMe io qpair process completion error 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 
00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.925 starting I/O failed: -6 00:24:24.925 Write completed with error (sct=0, sc=8) 00:24:24.926 [2024-11-19 10:52:14.257896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write 
completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O 
failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 [2024-11-19 10:52:14.258784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error 
(sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting I/O failed: -6 00:24:24.926 Write completed with error (sct=0, sc=8) 00:24:24.926 starting 
00:24:24.926 Write completed with error (sct=0, sc=8)
00:24:24.926 starting I/O failed: -6
00:24:24.926 [... the two lines above repeated for each outstanding write on the failing qpairs ...]
00:24:24.926 [2024-11-19 10:52:14.259794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:24.926 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs ...]
00:24:24.927 [2024-11-19 10:52:14.261651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:24.927 NVMe io qpair process completion error
00:24:24.927 [... repeated write-completion errors ...]
00:24:24.927 [2024-11-19 10:52:14.262806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.927 [... repeated write-completion errors ...]
00:24:24.927 [2024-11-19 10:52:14.263710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:24.928 [... repeated write-completion errors ...]
00:24:24.928 [2024-11-19 10:52:14.264732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:24.928 [... repeated write-completion errors ...]
00:24:24.928 [2024-11-19 10:52:14.269276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:24.928 NVMe io qpair process completion error
00:24:24.928 [... repeated write-completion errors ...]
00:24:24.928 [2024-11-19 10:52:14.270289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.929 [... repeated write-completion errors ...]
00:24:24.929 [2024-11-19 10:52:14.271190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:24.929 [... repeated write-completion errors ...]
00:24:24.929 [2024-11-19 10:52:14.272196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:24.930 [... repeated write-completion errors ...]
00:24:24.930 [2024-11-19 10:52:14.274496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:24.930 NVMe io qpair process completion error
00:24:24.930 [... repeated write-completion errors ...]
with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 [2024-11-19 10:52:14.275689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed 
with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 [2024-11-19 10:52:14.276628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:24.930 Write completed with error (sct=0, sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.930 Write completed with error (sct=0, 
sc=8) 00:24:24.930 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O 
failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 [2024-11-19 10:52:14.277659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:24.931 Write completed with error (sct=0, sc=8) 
00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, 
sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error 
(sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 starting I/O failed: -6 00:24:24.931 [2024-11-19 10:52:14.279921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.931 NVMe io qpair process completion error 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error 
(sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.931 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error 
(sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error 
(sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 
starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 [2024-11-19 10:52:14.283723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with 
error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 
Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.932 Write completed with error (sct=0, sc=8) 00:24:24.932 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, 
sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 [2024-11-19 10:52:14.285415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O 
failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting 
I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 
starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 [2024-11-19 10:52:14.288317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.933 NVMe io qpair process completion error 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 
Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 Write completed with error (sct=0, sc=8) 00:24:24.933 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 
00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write 
completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting 
I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write 
completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 [2024-11-19 10:52:14.290849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 
Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.934 Write completed with error (sct=0, sc=8) 00:24:24.934 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 
00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: 
-6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 [2024-11-19 10:52:14.295225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.935 NVMe io qpair process completion error 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting 
I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 [2024-11-19 10:52:14.296245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O 
failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, 
sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 [2024-11-19 10:52:14.297011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.935 starting I/O failed: -6 00:24:24.935 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 Write completed with error (sct=0, sc=8) 
00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 
00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 [2024-11-19 10:52:14.298077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 
Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 00:24:24.936 Write completed with error (sct=0, sc=8) 00:24:24.936 starting I/O failed: -6 
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 Write completed with error (sct=0, sc=8)
00:24:24.936 starting I/O failed: -6
00:24:24.936 [2024-11-19 10:52:14.300615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:24.936 NVMe io qpair process completion error
00:24:24.936 Initializing NVMe Controllers
00:24:24.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:24:24.936 Controller IO queue size 128, less than required.
00:24:24.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:24.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:24:24.936 Controller IO queue size 128, less than required.
00:24:24.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:24.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:24:24.936 Controller IO queue size 128, less than required.
00:24:24.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:24.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:24:24.936 Controller IO queue size 128, less than required.
00:24:24.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:24.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:24:24.936 Controller IO queue size 128, less than required.
00:24:24.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:24.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:24:24.937 Controller IO queue size 128, less than required.
00:24:24.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:24.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:24:24.937 Controller IO queue size 128, less than required.
00:24:24.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:24.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:24.937 Controller IO queue size 128, less than required.
00:24:24.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:24.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:24:24.937 Controller IO queue size 128, less than required.
00:24:24.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:24.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:24:24.937 Controller IO queue size 128, less than required.
00:24:24.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:24.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:24:24.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:24:24.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:24:24.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:24:24.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:24:24.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:24:24.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:24:24.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:24.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:24:24.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:24:24.937 Initialization complete. Launching workers.
00:24:24.937 ========================================================
00:24:24.937 Latency(us)
00:24:24.937 Device Information : IOPS MiB/s Average min max
00:24:24.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2182.95 93.80 58641.34 659.28 108458.03
00:24:24.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2221.99 95.48 57633.60 817.57 111127.36
00:24:24.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2185.34 93.90 58454.73 783.80 103452.89
00:24:24.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2204.64 94.73 58094.74 529.77 102015.54
00:24:24.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2206.16 94.80 58076.35 821.17 101292.06
00:24:24.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2207.68 94.86 58083.16 653.07 100089.81
00:24:24.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2140.88 91.99 59231.52 718.87 101080.67
00:24:24.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2146.95 92.25 59074.71 853.37 98730.10
00:24:24.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2212.88 95.08 57328.05 746.24 101169.81
00:24:24.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2204.43 94.72 57565.03 488.90 98045.58
00:24:24.937 ========================================================
00:24:24.937 Total : 21913.90 941.61 58211.67 488.90 111127.36
00:24:24.937
00:24:24.937 [2024-11-19 10:52:14.303610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149ae0 is same with the state(6) to be set
00:24:24.937 [2024-11-19 10:52:14.303653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147560 is same with the state(6) to be set
00:24:24.937 [2024-11-19 10:52:14.303681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x2149900 is same with the state(6) to be set 00:24:24.937 [2024-11-19 10:52:14.303710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148a70 is same with the state(6) to be set 00:24:24.937 [2024-11-19 10:52:14.303737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147ef0 is same with the state(6) to be set 00:24:24.937 [2024-11-19 10:52:14.303764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148740 is same with the state(6) to be set 00:24:24.937 [2024-11-19 10:52:14.303792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147890 is same with the state(6) to be set 00:24:24.937 [2024-11-19 10:52:14.303819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149720 is same with the state(6) to be set 00:24:24.937 [2024-11-19 10:52:14.303847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147bc0 is same with the state(6) to be set 00:24:24.937 [2024-11-19 10:52:14.303877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148410 is same with the state(6) to be set 00:24:24.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:24:24.937 10:52:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:24:25.875 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3991399 00:24:25.875 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:24:25.875 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3991399 00:24:25.875 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3991399 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:25.876 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:25.876 rmmod nvme_tcp 00:24:25.876 rmmod nvme_fabrics 00:24:26.136 rmmod nvme_keyring 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3991347 ']' 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3991347 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3991347 ']' 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3991347 00:24:26.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3991347) - No such process 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3991347 is not found' 00:24:26.136 Process with pid 3991347 is not found 
00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.136 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.041 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:28.041 00:24:28.041 real 0m9.769s 00:24:28.041 user 0m24.984s 00:24:28.041 sys 0m5.070s 00:24:28.041 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.041 10:52:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:28.041 ************************************ 00:24:28.041 END TEST nvmf_shutdown_tc4 00:24:28.041 ************************************ 00:24:28.041 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:24:28.041 00:24:28.041 real 0m41.681s 00:24:28.041 user 1m44.068s 00:24:28.041 sys 0m13.961s 00:24:28.041 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.041 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:28.041 ************************************ 00:24:28.041 END TEST nvmf_shutdown 00:24:28.041 ************************************ 00:24:28.301 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:28.301 10:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:28.301 10:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.301 10:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:28.301 ************************************ 00:24:28.301 START TEST nvmf_nsid 00:24:28.301 ************************************ 00:24:28.301 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:28.301 * Looking for test storage... 
00:24:28.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:28.301 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:28.301 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:24:28.301 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.301 
10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:28.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.301 --rc genhtml_branch_coverage=1 00:24:28.301 --rc genhtml_function_coverage=1 00:24:28.301 --rc genhtml_legend=1 00:24:28.301 --rc geninfo_all_blocks=1 00:24:28.301 --rc 
geninfo_unexecuted_blocks=1 00:24:28.301 00:24:28.301 ' 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:28.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.301 --rc genhtml_branch_coverage=1 00:24:28.301 --rc genhtml_function_coverage=1 00:24:28.301 --rc genhtml_legend=1 00:24:28.301 --rc geninfo_all_blocks=1 00:24:28.301 --rc geninfo_unexecuted_blocks=1 00:24:28.301 00:24:28.301 ' 00:24:28.301 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:28.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.302 --rc genhtml_branch_coverage=1 00:24:28.302 --rc genhtml_function_coverage=1 00:24:28.302 --rc genhtml_legend=1 00:24:28.302 --rc geninfo_all_blocks=1 00:24:28.302 --rc geninfo_unexecuted_blocks=1 00:24:28.302 00:24:28.302 ' 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:28.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.302 --rc genhtml_branch_coverage=1 00:24:28.302 --rc genhtml_function_coverage=1 00:24:28.302 --rc genhtml_legend=1 00:24:28.302 --rc geninfo_all_blocks=1 00:24:28.302 --rc geninfo_unexecuted_blocks=1 00:24:28.302 00:24:28.302 ' 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.302 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.561 10:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:24:28.561 10:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:35.189 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:35.189 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:35.189 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:35.190 Found net devices under 0000:86:00.0: cvl_0_0 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:35.190 Found net devices under 0000:86:00.1: cvl_0_1 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:35.190 10:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:35.190 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:24:35.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:24:35.190 00:24:35.190 --- 10.0.0.2 ping statistics --- 00:24:35.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.190 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:35.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:24:35.190 00:24:35.190 --- 10.0.0.1 ping statistics --- 00:24:35.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.190 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:35.190 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:35.190 10:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3996043 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3996043 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3996043 ']' 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:35.190 [2024-11-19 10:52:24.078736] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:24:35.190 [2024-11-19 10:52:24.078792] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.190 [2024-11-19 10:52:24.157904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.190 [2024-11-19 10:52:24.199748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.190 [2024-11-19 10:52:24.199784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.190 [2024-11-19 10:52:24.199792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.190 [2024-11-19 10:52:24.199798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.190 [2024-11-19 10:52:24.199803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:35.190 [2024-11-19 10:52:24.200365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3996098 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.190 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.191 
10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=584f387d-2f0e-4d62-93ef-3e84c3178b4c 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=e4d1e8cd-a086-449a-9a13-48145fd15635 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=268dd705-f087-498f-81ac-2558a1c24609 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:35.191 null0 00:24:35.191 null1 00:24:35.191 [2024-11-19 10:52:24.392051] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:24:35.191 [2024-11-19 10:52:24.392095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3996098 ] 00:24:35.191 null2 00:24:35.191 [2024-11-19 10:52:24.400017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.191 [2024-11-19 10:52:24.424232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3996098 /var/tmp/tgt2.sock 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3996098 ']' 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:35.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:35.191 [2024-11-19 10:52:24.467413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.191 [2024-11-19 10:52:24.511208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:35.191 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:35.461 [2024-11-19 10:52:25.037065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.461 [2024-11-19 10:52:25.053178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:35.461 nvme0n1 nvme0n2 00:24:35.461 nvme1n1 00:24:35.461 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:35.461 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:35.461 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:36.397 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:36.397 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:36.397 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:24:36.397 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:36.397 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:36.397 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:36.397 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:36.397 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:36.397 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:36.397 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:36.397 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:36.397 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:36.397 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 584f387d-2f0e-4d62-93ef-3e84c3178b4c 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:37.775 10:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=584f387d2f0e4d6293ef3e84c3178b4c 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 584F387D2F0E4D6293EF3E84C3178B4C 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 584F387D2F0E4D6293EF3E84C3178B4C == \5\8\4\F\3\8\7\D\2\F\0\E\4\D\6\2\9\3\E\F\3\E\8\4\C\3\1\7\8\B\4\C ]] 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid e4d1e8cd-a086-449a-9a13-48145fd15635 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:37.775 
10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e4d1e8cda086449a9a1348145fd15635 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E4D1E8CDA086449A9A1348145FD15635 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ E4D1E8CDA086449A9A1348145FD15635 == \E\4\D\1\E\8\C\D\A\0\8\6\4\4\9\A\9\A\1\3\4\8\1\4\5\F\D\1\5\6\3\5 ]] 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 268dd705-f087-498f-81ac-2558a1c24609 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=268dd705f087498f81ac2558a1c24609 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 268DD705F087498F81AC2558A1C24609 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 268DD705F087498F81AC2558A1C24609 == \2\6\8\D\D\7\0\5\F\0\8\7\4\9\8\F\8\1\A\C\2\5\5\8\A\1\C\2\4\6\0\9 ]] 00:24:37.775 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:38.035 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:38.035 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:38.035 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3996098 00:24:38.035 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3996098 ']' 00:24:38.035 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3996098 00:24:38.035 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:38.035 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.035 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3996098 00:24:38.035 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:38.035 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:38.035 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3996098' 00:24:38.035 killing process with pid 3996098 00:24:38.035 10:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3996098 00:24:38.035 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3996098 00:24:38.294 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:38.294 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:38.294 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:38.294 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:38.294 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:24:38.294 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:38.294 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:38.294 rmmod nvme_tcp 00:24:38.294 rmmod nvme_fabrics 00:24:38.294 rmmod nvme_keyring 00:24:38.294 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:38.294 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:38.294 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:38.294 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3996043 ']' 00:24:38.294 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3996043 00:24:38.294 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3996043 ']' 00:24:38.294 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3996043 00:24:38.294 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:38.294 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.294 10:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3996043 00:24:38.294 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:38.294 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:38.294 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3996043' 00:24:38.294 killing process with pid 3996043 00:24:38.294 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3996043 00:24:38.294 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3996043 00:24:38.553 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:38.553 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:38.553 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:38.553 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:38.553 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:38.553 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:38.553 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:38.553 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:38.553 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:38.553 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.553 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.553 10:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.090 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:41.090 00:24:41.090 real 0m12.389s 00:24:41.090 user 0m9.583s 00:24:41.090 sys 0m5.549s 00:24:41.090 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.090 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:41.090 ************************************ 00:24:41.090 END TEST nvmf_nsid 00:24:41.090 ************************************ 00:24:41.090 10:52:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:41.090 00:24:41.090 real 12m3.873s 00:24:41.090 user 25m54.979s 00:24:41.090 sys 3m39.621s 00:24:41.090 10:52:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.090 10:52:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:41.090 ************************************ 00:24:41.090 END TEST nvmf_target_extra 00:24:41.090 ************************************ 00:24:41.090 10:52:30 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:41.090 10:52:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:41.090 10:52:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.090 10:52:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:41.090 ************************************ 00:24:41.090 START TEST nvmf_host 00:24:41.090 ************************************ 00:24:41.090 10:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:41.090 * Looking for test storage... 
00:24:41.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:41.090 10:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:41.090 10:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:41.090 10:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:41.090 10:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:41.090 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:41.090 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:41.090 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:41.090 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:41.090 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:41.090 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:41.090 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:41.090 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:41.090 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:41.090 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:41.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.091 --rc genhtml_branch_coverage=1 00:24:41.091 --rc genhtml_function_coverage=1 00:24:41.091 --rc genhtml_legend=1 00:24:41.091 --rc geninfo_all_blocks=1 00:24:41.091 --rc geninfo_unexecuted_blocks=1 00:24:41.091 00:24:41.091 ' 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:41.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.091 --rc genhtml_branch_coverage=1 00:24:41.091 --rc genhtml_function_coverage=1 00:24:41.091 --rc genhtml_legend=1 00:24:41.091 --rc 
geninfo_all_blocks=1 00:24:41.091 --rc geninfo_unexecuted_blocks=1 00:24:41.091 00:24:41.091 ' 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:41.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.091 --rc genhtml_branch_coverage=1 00:24:41.091 --rc genhtml_function_coverage=1 00:24:41.091 --rc genhtml_legend=1 00:24:41.091 --rc geninfo_all_blocks=1 00:24:41.091 --rc geninfo_unexecuted_blocks=1 00:24:41.091 00:24:41.091 ' 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:41.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.091 --rc genhtml_branch_coverage=1 00:24:41.091 --rc genhtml_function_coverage=1 00:24:41.091 --rc genhtml_legend=1 00:24:41.091 --rc geninfo_all_blocks=1 00:24:41.091 --rc geninfo_unexecuted_blocks=1 00:24:41.091 00:24:41.091 ' 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:41.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.091 ************************************ 00:24:41.091 START TEST nvmf_multicontroller 00:24:41.091 ************************************ 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:41.091 * Looking for test storage... 
00:24:41.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:41.091 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.092 --rc genhtml_branch_coverage=1 00:24:41.092 --rc genhtml_function_coverage=1 
00:24:41.092 --rc genhtml_legend=1 00:24:41.092 --rc geninfo_all_blocks=1 00:24:41.092 --rc geninfo_unexecuted_blocks=1 00:24:41.092 00:24:41.092 ' 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.092 --rc genhtml_branch_coverage=1 00:24:41.092 --rc genhtml_function_coverage=1 00:24:41.092 --rc genhtml_legend=1 00:24:41.092 --rc geninfo_all_blocks=1 00:24:41.092 --rc geninfo_unexecuted_blocks=1 00:24:41.092 00:24:41.092 ' 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.092 --rc genhtml_branch_coverage=1 00:24:41.092 --rc genhtml_function_coverage=1 00:24:41.092 --rc genhtml_legend=1 00:24:41.092 --rc geninfo_all_blocks=1 00:24:41.092 --rc geninfo_unexecuted_blocks=1 00:24:41.092 00:24:41.092 ' 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.092 --rc genhtml_branch_coverage=1 00:24:41.092 --rc genhtml_function_coverage=1 00:24:41.092 --rc genhtml_legend=1 00:24:41.092 --rc geninfo_all_blocks=1 00:24:41.092 --rc geninfo_unexecuted_blocks=1 00:24:41.092 00:24:41.092 ' 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:41.092 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.093 10:52:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:41.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:41.093 10:52:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:47.661 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:47.661 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.661 10:52:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.661 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:47.662 Found net devices under 0000:86:00.0: cvl_0_0 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:47.662 Found net devices under 0000:86:00.1: cvl_0_1 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:47.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:47.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:24:47.662 00:24:47.662 --- 10.0.0.2 ping statistics --- 00:24:47.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.662 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:47.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:24:47.662 00:24:47.662 --- 10.0.0.1 ping statistics --- 00:24:47.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.662 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=4000237 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 4000237 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 4000237 ']' 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.662 10:52:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:47.662 [2024-11-19 10:52:36.720531] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:24:47.662 [2024-11-19 10:52:36.720575] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.662 [2024-11-19 10:52:36.800596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:47.662 [2024-11-19 10:52:36.842568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.662 [2024-11-19 10:52:36.842604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:47.662 [2024-11-19 10:52:36.842611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.662 [2024-11-19 10:52:36.842616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.662 [2024-11-19 10:52:36.842622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:47.662 [2024-11-19 10:52:36.844120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.662 [2024-11-19 10:52:36.844260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.662 [2024-11-19 10:52:36.844260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:47.921 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.921 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:47.921 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:47.921 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:47.921 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:47.921 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.921 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:47.921 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.921 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:47.922 [2024-11-19 10:52:37.598296] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:47.922 Malloc0 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:47.922 [2024-11-19 
10:52:37.660280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:47.922 [2024-11-19 10:52:37.668221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:47.922 Malloc1 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.922 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=4000447 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 4000447 /var/tmp/bdevperf.sock 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 4000447 ']' 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:48.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:48.180 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.439 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.439 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:48.439 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:48.439 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.439 10:52:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.439 NVMe0n1 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.439 1 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:48.439 10:52:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.439 request: 00:24:48.439 { 00:24:48.439 "name": "NVMe0", 00:24:48.439 "trtype": "tcp", 00:24:48.439 "traddr": "10.0.0.2", 00:24:48.439 "adrfam": "ipv4", 00:24:48.439 "trsvcid": "4420", 00:24:48.439 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.439 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:48.439 "hostaddr": "10.0.0.1", 00:24:48.439 "prchk_reftag": false, 00:24:48.439 "prchk_guard": false, 00:24:48.439 "hdgst": false, 00:24:48.439 "ddgst": false, 00:24:48.439 "allow_unrecognized_csi": false, 00:24:48.439 "method": "bdev_nvme_attach_controller", 00:24:48.439 "req_id": 1 00:24:48.439 } 00:24:48.439 Got JSON-RPC error response 00:24:48.439 response: 00:24:48.439 { 00:24:48.439 "code": -114, 00:24:48.439 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:48.439 } 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:48.439 10:52:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:48.439 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:48.440 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:48.440 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:48.440 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:48.440 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.440 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.698 request: 00:24:48.698 { 00:24:48.698 "name": "NVMe0", 00:24:48.698 "trtype": "tcp", 00:24:48.698 "traddr": "10.0.0.2", 00:24:48.698 "adrfam": "ipv4", 00:24:48.698 "trsvcid": "4420", 00:24:48.698 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:48.698 "hostaddr": "10.0.0.1", 00:24:48.698 "prchk_reftag": false, 00:24:48.698 "prchk_guard": false, 00:24:48.698 "hdgst": false, 00:24:48.698 "ddgst": false, 00:24:48.698 "allow_unrecognized_csi": false, 00:24:48.698 "method": "bdev_nvme_attach_controller", 00:24:48.698 "req_id": 1 00:24:48.698 } 00:24:48.698 Got JSON-RPC error response 00:24:48.698 response: 00:24:48.698 { 00:24:48.698 "code": -114, 00:24:48.698 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:48.698 } 00:24:48.698 10:52:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:48.698 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:48.698 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:48.698 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:48.698 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:48.698 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:48.698 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:48.698 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:48.698 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:48.698 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:48.698 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:48.698 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:48.698 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:48.698 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.698 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.698 request: 00:24:48.698 { 00:24:48.698 "name": "NVMe0", 00:24:48.698 "trtype": "tcp", 00:24:48.698 "traddr": "10.0.0.2", 00:24:48.698 "adrfam": "ipv4", 00:24:48.698 "trsvcid": "4420", 00:24:48.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.698 "hostaddr": "10.0.0.1", 00:24:48.699 "prchk_reftag": false, 00:24:48.699 "prchk_guard": false, 00:24:48.699 "hdgst": false, 00:24:48.699 "ddgst": false, 00:24:48.699 "multipath": "disable", 00:24:48.699 "allow_unrecognized_csi": false, 00:24:48.699 "method": "bdev_nvme_attach_controller", 00:24:48.699 "req_id": 1 00:24:48.699 } 00:24:48.699 Got JSON-RPC error response 00:24:48.699 response: 00:24:48.699 { 00:24:48.699 "code": -114, 00:24:48.699 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:48.699 } 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.699 request: 00:24:48.699 { 00:24:48.699 "name": "NVMe0", 00:24:48.699 "trtype": "tcp", 00:24:48.699 "traddr": "10.0.0.2", 00:24:48.699 "adrfam": "ipv4", 00:24:48.699 "trsvcid": "4420", 00:24:48.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.699 "hostaddr": "10.0.0.1", 00:24:48.699 "prchk_reftag": false, 00:24:48.699 "prchk_guard": false, 00:24:48.699 "hdgst": false, 00:24:48.699 "ddgst": false, 00:24:48.699 "multipath": "failover", 00:24:48.699 "allow_unrecognized_csi": false, 00:24:48.699 "method": "bdev_nvme_attach_controller", 00:24:48.699 "req_id": 1 00:24:48.699 } 00:24:48.699 Got JSON-RPC error response 00:24:48.699 response: 00:24:48.699 { 00:24:48.699 "code": -114, 00:24:48.699 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:48.699 } 00:24:48.699 10:52:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.699 NVMe0n1 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.699 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.958 00:24:48.958 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.958 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:48.958 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:48.958 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.958 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.958 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.958 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:48.958 10:52:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:50.334 { 00:24:50.334 "results": [ 00:24:50.334 { 00:24:50.334 "job": "NVMe0n1", 00:24:50.334 "core_mask": "0x1", 00:24:50.334 "workload": "write", 00:24:50.334 "status": "finished", 00:24:50.334 "queue_depth": 128, 00:24:50.334 "io_size": 4096, 00:24:50.334 "runtime": 1.005702, 00:24:50.334 "iops": 24733.966920618634, 00:24:50.334 "mibps": 96.61705828366654, 00:24:50.334 "io_failed": 0, 00:24:50.334 "io_timeout": 0, 00:24:50.334 "avg_latency_us": 5163.514163273511, 00:24:50.334 "min_latency_us": 2871.1009523809525, 00:24:50.334 "max_latency_us": 13294.445714285714 00:24:50.334 } 00:24:50.334 ], 00:24:50.334 "core_count": 1 00:24:50.334 } 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 4000447 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 4000447 ']' 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 4000447 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4000447 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4000447' 00:24:50.334 killing process with pid 4000447 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 4000447 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 4000447 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:50.334 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:50.334 [2024-11-19 10:52:37.776409] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:24:50.334 [2024-11-19 10:52:37.776464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4000447 ] 00:24:50.334 [2024-11-19 10:52:37.851639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.334 [2024-11-19 10:52:37.893899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.334 [2024-11-19 10:52:38.578274] bdev.c:4686:bdev_name_add: *ERROR*: Bdev name 5fa73481-fc03-4952-8dfc-5fa340bcf5f7 already exists 00:24:50.334 [2024-11-19 10:52:38.578303] bdev.c:7824:bdev_register: *ERROR*: Unable to add uuid:5fa73481-fc03-4952-8dfc-5fa340bcf5f7 alias for bdev NVMe1n1 00:24:50.334 [2024-11-19 10:52:38.578311] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:50.334 Running I/O for 1 seconds... 00:24:50.334 24684.00 IOPS, 96.42 MiB/s 00:24:50.334 Latency(us) 00:24:50.334 [2024-11-19T09:52:40.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.334 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:50.334 NVMe0n1 : 1.01 24733.97 96.62 0.00 0.00 5163.51 2871.10 13294.45 00:24:50.334 [2024-11-19T09:52:40.126Z] =================================================================================================================== 00:24:50.334 [2024-11-19T09:52:40.126Z] Total : 24733.97 96.62 0.00 0.00 5163.51 2871.10 13294.45 00:24:50.334 Received shutdown signal, test time was about 1.000000 seconds 00:24:50.334 00:24:50.334 Latency(us) 00:24:50.334 [2024-11-19T09:52:40.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.334 [2024-11-19T09:52:40.126Z] =================================================================================================================== 00:24:50.334 [2024-11-19T09:52:40.126Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:24:50.334 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.334 10:52:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:50.334 rmmod nvme_tcp 00:24:50.334 rmmod nvme_fabrics 00:24:50.334 rmmod nvme_keyring 00:24:50.334 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.334 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:50.334 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:50.334 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 4000237 ']' 00:24:50.334 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 4000237 00:24:50.334 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 4000237 ']' 00:24:50.334 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 4000237 
00:24:50.334 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:50.334 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.334 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4000237 00:24:50.334 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:50.334 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:50.334 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4000237' 00:24:50.335 killing process with pid 4000237 00:24:50.335 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 4000237 00:24:50.335 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 4000237 00:24:50.593 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:50.593 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:50.593 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:50.593 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:50.593 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:50.593 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:50.593 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:50.593 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:50.593 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:24:50.593 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.593 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.593 10:52:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:53.131 00:24:53.131 real 0m11.736s 00:24:53.131 user 0m14.431s 00:24:53.131 sys 0m5.157s 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:53.131 ************************************ 00:24:53.131 END TEST nvmf_multicontroller 00:24:53.131 ************************************ 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.131 ************************************ 00:24:53.131 START TEST nvmf_aer 00:24:53.131 ************************************ 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:53.131 * Looking for test storage... 
00:24:53.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:53.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.131 --rc genhtml_branch_coverage=1 00:24:53.131 --rc genhtml_function_coverage=1 00:24:53.131 --rc genhtml_legend=1 00:24:53.131 --rc geninfo_all_blocks=1 00:24:53.131 --rc geninfo_unexecuted_blocks=1 00:24:53.131 00:24:53.131 ' 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:53.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.131 --rc 
genhtml_branch_coverage=1 00:24:53.131 --rc genhtml_function_coverage=1 00:24:53.131 --rc genhtml_legend=1 00:24:53.131 --rc geninfo_all_blocks=1 00:24:53.131 --rc geninfo_unexecuted_blocks=1 00:24:53.131 00:24:53.131 ' 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:53.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.131 --rc genhtml_branch_coverage=1 00:24:53.131 --rc genhtml_function_coverage=1 00:24:53.131 --rc genhtml_legend=1 00:24:53.131 --rc geninfo_all_blocks=1 00:24:53.131 --rc geninfo_unexecuted_blocks=1 00:24:53.131 00:24:53.131 ' 00:24:53.131 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:53.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.131 --rc genhtml_branch_coverage=1 00:24:53.131 --rc genhtml_function_coverage=1 00:24:53.132 --rc genhtml_legend=1 00:24:53.132 --rc geninfo_all_blocks=1 00:24:53.132 --rc geninfo_unexecuted_blocks=1 00:24:53.132 00:24:53.132 ' 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.132 10:52:42 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:53.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:53.132 10:52:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:59.704 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:59.705 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:59.705 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.705 10:52:48 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:59.705 Found net devices under 0000:86:00.0: cvl_0_0 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:59.705 Found net devices under 0000:86:00.1: cvl_0_1 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:59.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:59.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:24:59.705 00:24:59.705 --- 10.0.0.2 ping statistics --- 00:24:59.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.705 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:59.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:24:59.705 00:24:59.705 --- 10.0.0.1 ping statistics --- 00:24:59.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.705 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=4004438 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 4004438 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 4004438 ']' 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:59.705 10:52:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.705 [2024-11-19 10:52:48.678876] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:24:59.705 [2024-11-19 10:52:48.678926] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.705 [2024-11-19 10:52:48.758596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:59.705 [2024-11-19 10:52:48.801469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:59.705 [2024-11-19 10:52:48.801506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.705 [2024-11-19 10:52:48.801512] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.705 [2024-11-19 10:52:48.801519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.705 [2024-11-19 10:52:48.801524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:59.705 [2024-11-19 10:52:48.803110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.705 [2024-11-19 10:52:48.803231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:59.705 [2024-11-19 10:52:48.803295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.705 [2024-11-19 10:52:48.803296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:59.964 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.964 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:59.964 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:59.964 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.965 [2024-11-19 10:52:49.558421] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.965 Malloc0 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.965 [2024-11-19 10:52:49.621711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.965 [ 00:24:59.965 { 00:24:59.965 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:59.965 "subtype": "Discovery", 00:24:59.965 "listen_addresses": [], 00:24:59.965 "allow_any_host": true, 00:24:59.965 "hosts": [] 00:24:59.965 }, 00:24:59.965 { 00:24:59.965 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.965 "subtype": "NVMe", 00:24:59.965 "listen_addresses": [ 00:24:59.965 { 00:24:59.965 "trtype": "TCP", 00:24:59.965 "adrfam": "IPv4", 00:24:59.965 "traddr": "10.0.0.2", 00:24:59.965 "trsvcid": "4420" 00:24:59.965 } 00:24:59.965 ], 00:24:59.965 "allow_any_host": true, 00:24:59.965 "hosts": [], 00:24:59.965 "serial_number": "SPDK00000000000001", 00:24:59.965 "model_number": "SPDK bdev Controller", 00:24:59.965 "max_namespaces": 2, 00:24:59.965 "min_cntlid": 1, 00:24:59.965 "max_cntlid": 65519, 00:24:59.965 "namespaces": [ 00:24:59.965 { 00:24:59.965 "nsid": 1, 00:24:59.965 "bdev_name": "Malloc0", 00:24:59.965 "name": "Malloc0", 00:24:59.965 "nguid": "6A5AF63C5FB04906B9350B4A1A50350B", 00:24:59.965 "uuid": "6a5af63c-5fb0-4906-b935-0b4a1a50350b" 00:24:59.965 } 00:24:59.965 ] 00:24:59.965 } 00:24:59.965 ] 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=4004536 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:59.965 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:00.222 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:00.222 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:25:00.222 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:25:00.222 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:00.222 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:00.222 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:00.222 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:25:00.222 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:00.223 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.223 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:00.223 Malloc1 00:25:00.223 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.223 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:00.223 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.223 10:52:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:00.223 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.223 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:00.223 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.223 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:00.481 Asynchronous Event Request test 00:25:00.481 Attaching to 10.0.0.2 00:25:00.481 Attached to 10.0.0.2 00:25:00.481 Registering asynchronous event callbacks... 00:25:00.481 Starting namespace attribute notice tests for all controllers... 00:25:00.481 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:00.481 aer_cb - Changed Namespace 00:25:00.481 Cleaning up... 
00:25:00.481 [ 00:25:00.481 { 00:25:00.481 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:00.481 "subtype": "Discovery", 00:25:00.481 "listen_addresses": [], 00:25:00.481 "allow_any_host": true, 00:25:00.481 "hosts": [] 00:25:00.481 }, 00:25:00.481 { 00:25:00.481 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.481 "subtype": "NVMe", 00:25:00.481 "listen_addresses": [ 00:25:00.481 { 00:25:00.481 "trtype": "TCP", 00:25:00.481 "adrfam": "IPv4", 00:25:00.481 "traddr": "10.0.0.2", 00:25:00.481 "trsvcid": "4420" 00:25:00.481 } 00:25:00.481 ], 00:25:00.481 "allow_any_host": true, 00:25:00.481 "hosts": [], 00:25:00.481 "serial_number": "SPDK00000000000001", 00:25:00.481 "model_number": "SPDK bdev Controller", 00:25:00.481 "max_namespaces": 2, 00:25:00.481 "min_cntlid": 1, 00:25:00.481 "max_cntlid": 65519, 00:25:00.481 "namespaces": [ 00:25:00.481 { 00:25:00.481 "nsid": 1, 00:25:00.481 "bdev_name": "Malloc0", 00:25:00.481 "name": "Malloc0", 00:25:00.481 "nguid": "6A5AF63C5FB04906B9350B4A1A50350B", 00:25:00.481 "uuid": "6a5af63c-5fb0-4906-b935-0b4a1a50350b" 00:25:00.481 }, 00:25:00.481 { 00:25:00.481 "nsid": 2, 00:25:00.481 "bdev_name": "Malloc1", 00:25:00.481 "name": "Malloc1", 00:25:00.481 "nguid": "BAA0026BEBF24B208D397512ED51CC8C", 00:25:00.481 "uuid": "baa0026b-ebf2-4b20-8d39-7512ed51cc8c" 00:25:00.481 } 00:25:00.481 ] 00:25:00.481 } 00:25:00.481 ] 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 4004536 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.481 10:52:50 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.481 rmmod nvme_tcp 00:25:00.481 rmmod nvme_fabrics 00:25:00.481 rmmod nvme_keyring 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
4004438 ']' 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 4004438 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 4004438 ']' 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 4004438 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4004438 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4004438' 00:25:00.481 killing process with pid 4004438 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 4004438 00:25:00.481 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 4004438 00:25:00.740 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:00.740 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:00.740 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:00.740 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:25:00.740 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:25:00.740 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:00.740 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:25:00.740 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:00.740 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:00.740 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.740 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.740 10:52:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.642 10:52:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:02.642 00:25:02.642 real 0m9.986s 00:25:02.642 user 0m8.053s 00:25:02.642 sys 0m4.965s 00:25:02.642 10:52:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:02.642 10:52:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:02.642 ************************************ 00:25:02.642 END TEST nvmf_aer 00:25:02.642 ************************************ 00:25:02.901 10:52:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:02.901 10:52:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:02.901 10:52:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:02.901 10:52:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.901 ************************************ 00:25:02.901 START TEST nvmf_async_init 00:25:02.901 ************************************ 00:25:02.901 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:02.901 * Looking for test storage... 
00:25:02.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:02.901 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:02.901 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:25:02.901 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:02.901 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:02.901 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:02.901 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:02.901 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:02.901 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:25:02.901 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:02.902 10:52:52 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:02.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.902 --rc genhtml_branch_coverage=1 00:25:02.902 --rc genhtml_function_coverage=1 00:25:02.902 --rc genhtml_legend=1 00:25:02.902 --rc geninfo_all_blocks=1 00:25:02.902 --rc geninfo_unexecuted_blocks=1 00:25:02.902 
00:25:02.902 ' 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:02.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.902 --rc genhtml_branch_coverage=1 00:25:02.902 --rc genhtml_function_coverage=1 00:25:02.902 --rc genhtml_legend=1 00:25:02.902 --rc geninfo_all_blocks=1 00:25:02.902 --rc geninfo_unexecuted_blocks=1 00:25:02.902 00:25:02.902 ' 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:02.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.902 --rc genhtml_branch_coverage=1 00:25:02.902 --rc genhtml_function_coverage=1 00:25:02.902 --rc genhtml_legend=1 00:25:02.902 --rc geninfo_all_blocks=1 00:25:02.902 --rc geninfo_unexecuted_blocks=1 00:25:02.902 00:25:02.902 ' 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:02.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.902 --rc genhtml_branch_coverage=1 00:25:02.902 --rc genhtml_function_coverage=1 00:25:02.902 --rc genhtml_legend=1 00:25:02.902 --rc geninfo_all_blocks=1 00:25:02.902 --rc geninfo_unexecuted_blocks=1 00:25:02.902 00:25:02.902 ' 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.902 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.161 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:03.161 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:03.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b05e44ff983e4480ab15cddb91540bcb 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:25:03.162 10:52:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.729 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:09.729 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:25:09.729 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:09.729 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:09.729 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:09.729 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:09.729 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:09.729 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:25:09.729 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:09.729 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:25:09.729 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:25:09.729 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:25:09.729 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:25:09.729 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:25:09.729 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:25:09.729 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:09.730 10:52:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:09.730 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:09.730 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:09.730 Found net devices under 0000:86:00.0: cvl_0_0 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:09.730 Found net devices under 0000:86:00.1: cvl_0_1 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:09.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:09.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:25:09.730 00:25:09.730 --- 10.0.0.2 ping statistics --- 00:25:09.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.730 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:09.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:09.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:25:09.730 00:25:09.730 --- 10.0.0.1 ping statistics --- 00:25:09.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.730 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=4008219 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 4008219 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 4008219 ']' 00:25:09.730 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.731 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.731 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:09.731 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.731 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.731 [2024-11-19 10:52:58.743311] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:25:09.731 [2024-11-19 10:52:58.743364] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:09.731 [2024-11-19 10:52:58.823222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.731 [2024-11-19 10:52:58.862767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:09.731 [2024-11-19 10:52:58.862800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:09.731 [2024-11-19 10:52:58.862810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:09.731 [2024-11-19 10:52:58.862816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:09.731 [2024-11-19 10:52:58.862821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:09.731 [2024-11-19 10:52:58.863379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.731 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.731 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:25:09.731 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:09.731 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:09.731 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.731 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.731 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:09.731 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.731 10:52:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.731 [2024-11-19 10:52:59.002074] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.731 null0 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b05e44ff983e4480ab15cddb91540bcb 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.731 [2024-11-19 10:52:59.046347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.731 nvme0n1 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.731 [ 00:25:09.731 { 00:25:09.731 "name": "nvme0n1", 00:25:09.731 "aliases": [ 00:25:09.731 "b05e44ff-983e-4480-ab15-cddb91540bcb" 00:25:09.731 ], 00:25:09.731 "product_name": "NVMe disk", 00:25:09.731 "block_size": 512, 00:25:09.731 "num_blocks": 2097152, 00:25:09.731 "uuid": "b05e44ff-983e-4480-ab15-cddb91540bcb", 00:25:09.731 "numa_id": 1, 00:25:09.731 "assigned_rate_limits": { 00:25:09.731 "rw_ios_per_sec": 0, 00:25:09.731 "rw_mbytes_per_sec": 0, 00:25:09.731 "r_mbytes_per_sec": 0, 00:25:09.731 "w_mbytes_per_sec": 0 00:25:09.731 }, 00:25:09.731 "claimed": false, 00:25:09.731 "zoned": false, 00:25:09.731 "supported_io_types": { 00:25:09.731 "read": true, 00:25:09.731 "write": true, 00:25:09.731 "unmap": false, 00:25:09.731 "flush": true, 00:25:09.731 "reset": true, 00:25:09.731 "nvme_admin": true, 00:25:09.731 "nvme_io": true, 00:25:09.731 "nvme_io_md": false, 00:25:09.731 "write_zeroes": true, 00:25:09.731 "zcopy": false, 00:25:09.731 "get_zone_info": false, 00:25:09.731 "zone_management": false, 00:25:09.731 "zone_append": false, 00:25:09.731 "compare": true, 00:25:09.731 "compare_and_write": true, 00:25:09.731 "abort": true, 00:25:09.731 "seek_hole": false, 00:25:09.731 "seek_data": false, 00:25:09.731 "copy": true, 00:25:09.731 
"nvme_iov_md": false 00:25:09.731 }, 00:25:09.731 "memory_domains": [ 00:25:09.731 { 00:25:09.731 "dma_device_id": "system", 00:25:09.731 "dma_device_type": 1 00:25:09.731 } 00:25:09.731 ], 00:25:09.731 "driver_specific": { 00:25:09.731 "nvme": [ 00:25:09.731 { 00:25:09.731 "trid": { 00:25:09.731 "trtype": "TCP", 00:25:09.731 "adrfam": "IPv4", 00:25:09.731 "traddr": "10.0.0.2", 00:25:09.731 "trsvcid": "4420", 00:25:09.731 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:09.731 }, 00:25:09.731 "ctrlr_data": { 00:25:09.731 "cntlid": 1, 00:25:09.731 "vendor_id": "0x8086", 00:25:09.731 "model_number": "SPDK bdev Controller", 00:25:09.731 "serial_number": "00000000000000000000", 00:25:09.731 "firmware_revision": "25.01", 00:25:09.731 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:09.731 "oacs": { 00:25:09.731 "security": 0, 00:25:09.731 "format": 0, 00:25:09.731 "firmware": 0, 00:25:09.731 "ns_manage": 0 00:25:09.731 }, 00:25:09.731 "multi_ctrlr": true, 00:25:09.731 "ana_reporting": false 00:25:09.731 }, 00:25:09.731 "vs": { 00:25:09.731 "nvme_version": "1.3" 00:25:09.731 }, 00:25:09.731 "ns_data": { 00:25:09.731 "id": 1, 00:25:09.731 "can_share": true 00:25:09.731 } 00:25:09.731 } 00:25:09.731 ], 00:25:09.731 "mp_policy": "active_passive" 00:25:09.731 } 00:25:09.731 } 00:25:09.731 ] 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.731 [2024-11-19 10:52:59.307814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:09.731 [2024-11-19 10:52:59.307867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xbf4220 (9): Bad file descriptor 00:25:09.731 [2024-11-19 10:52:59.439276] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.731 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.731 [ 00:25:09.731 { 00:25:09.731 "name": "nvme0n1", 00:25:09.731 "aliases": [ 00:25:09.731 "b05e44ff-983e-4480-ab15-cddb91540bcb" 00:25:09.731 ], 00:25:09.731 "product_name": "NVMe disk", 00:25:09.731 "block_size": 512, 00:25:09.731 "num_blocks": 2097152, 00:25:09.731 "uuid": "b05e44ff-983e-4480-ab15-cddb91540bcb", 00:25:09.731 "numa_id": 1, 00:25:09.731 "assigned_rate_limits": { 00:25:09.731 "rw_ios_per_sec": 0, 00:25:09.731 "rw_mbytes_per_sec": 0, 00:25:09.731 "r_mbytes_per_sec": 0, 00:25:09.731 "w_mbytes_per_sec": 0 00:25:09.731 }, 00:25:09.731 "claimed": false, 00:25:09.731 "zoned": false, 00:25:09.731 "supported_io_types": { 00:25:09.731 "read": true, 00:25:09.732 "write": true, 00:25:09.732 "unmap": false, 00:25:09.732 "flush": true, 00:25:09.732 "reset": true, 00:25:09.732 "nvme_admin": true, 00:25:09.732 "nvme_io": true, 00:25:09.732 "nvme_io_md": false, 00:25:09.732 "write_zeroes": true, 00:25:09.732 "zcopy": false, 00:25:09.732 "get_zone_info": false, 00:25:09.732 "zone_management": false, 00:25:09.732 "zone_append": false, 00:25:09.732 "compare": true, 00:25:09.732 "compare_and_write": true, 00:25:09.732 "abort": true, 00:25:09.732 "seek_hole": false, 00:25:09.732 "seek_data": false, 00:25:09.732 "copy": true, 00:25:09.732 "nvme_iov_md": false 00:25:09.732 }, 00:25:09.732 "memory_domains": [ 
00:25:09.732 { 00:25:09.732 "dma_device_id": "system", 00:25:09.732 "dma_device_type": 1 00:25:09.732 } 00:25:09.732 ], 00:25:09.732 "driver_specific": { 00:25:09.732 "nvme": [ 00:25:09.732 { 00:25:09.732 "trid": { 00:25:09.732 "trtype": "TCP", 00:25:09.732 "adrfam": "IPv4", 00:25:09.732 "traddr": "10.0.0.2", 00:25:09.732 "trsvcid": "4420", 00:25:09.732 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:09.732 }, 00:25:09.732 "ctrlr_data": { 00:25:09.732 "cntlid": 2, 00:25:09.732 "vendor_id": "0x8086", 00:25:09.732 "model_number": "SPDK bdev Controller", 00:25:09.732 "serial_number": "00000000000000000000", 00:25:09.732 "firmware_revision": "25.01", 00:25:09.732 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:09.732 "oacs": { 00:25:09.732 "security": 0, 00:25:09.732 "format": 0, 00:25:09.732 "firmware": 0, 00:25:09.732 "ns_manage": 0 00:25:09.732 }, 00:25:09.732 "multi_ctrlr": true, 00:25:09.732 "ana_reporting": false 00:25:09.732 }, 00:25:09.732 "vs": { 00:25:09.732 "nvme_version": "1.3" 00:25:09.732 }, 00:25:09.732 "ns_data": { 00:25:09.732 "id": 1, 00:25:09.732 "can_share": true 00:25:09.732 } 00:25:09.732 } 00:25:09.732 ], 00:25:09.732 "mp_policy": "active_passive" 00:25:09.732 } 00:25:09.732 } 00:25:09.732 ] 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.QoTYk6JTLc 
00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.QoTYk6JTLc 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.QoTYk6JTLc 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.732 [2024-11-19 10:52:59.512434] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:09.732 [2024-11-19 10:52:59.512526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:25:09.732 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.992 [2024-11-19 10:52:59.528488] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:09.992 nvme0n1 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.992 [ 00:25:09.992 { 00:25:09.992 "name": "nvme0n1", 00:25:09.992 "aliases": [ 00:25:09.992 "b05e44ff-983e-4480-ab15-cddb91540bcb" 00:25:09.992 ], 00:25:09.992 "product_name": "NVMe disk", 00:25:09.992 "block_size": 512, 00:25:09.992 "num_blocks": 2097152, 00:25:09.992 "uuid": "b05e44ff-983e-4480-ab15-cddb91540bcb", 00:25:09.992 "numa_id": 1, 00:25:09.992 "assigned_rate_limits": { 00:25:09.992 "rw_ios_per_sec": 0, 00:25:09.992 
"rw_mbytes_per_sec": 0, 00:25:09.992 "r_mbytes_per_sec": 0, 00:25:09.992 "w_mbytes_per_sec": 0 00:25:09.992 }, 00:25:09.992 "claimed": false, 00:25:09.992 "zoned": false, 00:25:09.992 "supported_io_types": { 00:25:09.992 "read": true, 00:25:09.992 "write": true, 00:25:09.992 "unmap": false, 00:25:09.992 "flush": true, 00:25:09.992 "reset": true, 00:25:09.992 "nvme_admin": true, 00:25:09.992 "nvme_io": true, 00:25:09.992 "nvme_io_md": false, 00:25:09.992 "write_zeroes": true, 00:25:09.992 "zcopy": false, 00:25:09.992 "get_zone_info": false, 00:25:09.992 "zone_management": false, 00:25:09.992 "zone_append": false, 00:25:09.992 "compare": true, 00:25:09.992 "compare_and_write": true, 00:25:09.992 "abort": true, 00:25:09.992 "seek_hole": false, 00:25:09.992 "seek_data": false, 00:25:09.992 "copy": true, 00:25:09.992 "nvme_iov_md": false 00:25:09.992 }, 00:25:09.992 "memory_domains": [ 00:25:09.992 { 00:25:09.992 "dma_device_id": "system", 00:25:09.992 "dma_device_type": 1 00:25:09.992 } 00:25:09.992 ], 00:25:09.992 "driver_specific": { 00:25:09.992 "nvme": [ 00:25:09.992 { 00:25:09.992 "trid": { 00:25:09.992 "trtype": "TCP", 00:25:09.992 "adrfam": "IPv4", 00:25:09.992 "traddr": "10.0.0.2", 00:25:09.992 "trsvcid": "4421", 00:25:09.992 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:09.992 }, 00:25:09.992 "ctrlr_data": { 00:25:09.992 "cntlid": 3, 00:25:09.992 "vendor_id": "0x8086", 00:25:09.992 "model_number": "SPDK bdev Controller", 00:25:09.992 "serial_number": "00000000000000000000", 00:25:09.992 "firmware_revision": "25.01", 00:25:09.992 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:09.992 "oacs": { 00:25:09.992 "security": 0, 00:25:09.992 "format": 0, 00:25:09.992 "firmware": 0, 00:25:09.992 "ns_manage": 0 00:25:09.992 }, 00:25:09.992 "multi_ctrlr": true, 00:25:09.992 "ana_reporting": false 00:25:09.992 }, 00:25:09.992 "vs": { 00:25:09.992 "nvme_version": "1.3" 00:25:09.992 }, 00:25:09.992 "ns_data": { 00:25:09.992 "id": 1, 00:25:09.992 "can_share": true 00:25:09.992 } 
00:25:09.992 } 00:25:09.992 ], 00:25:09.992 "mp_policy": "active_passive" 00:25:09.992 } 00:25:09.992 } 00:25:09.992 ] 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.QoTYk6JTLc 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:09.992 rmmod nvme_tcp 00:25:09.992 rmmod nvme_fabrics 00:25:09.992 rmmod nvme_keyring 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:25:09.992 10:52:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 4008219 ']' 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 4008219 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 4008219 ']' 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 4008219 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4008219 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4008219' 00:25:09.992 killing process with pid 4008219 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 4008219 00:25:09.992 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 4008219 00:25:10.252 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:10.252 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:10.252 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:10.252 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:25:10.252 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:25:10.252 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:10.252 
10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:25:10.252 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:10.252 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:10.252 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.252 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.252 10:52:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.789 10:53:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:12.789 00:25:12.789 real 0m9.472s 00:25:12.789 user 0m3.020s 00:25:12.789 sys 0m4.866s 00:25:12.789 10:53:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:12.789 10:53:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:12.789 ************************************ 00:25:12.789 END TEST nvmf_async_init 00:25:12.789 ************************************ 00:25:12.789 10:53:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.790 ************************************ 00:25:12.790 START TEST dma 00:25:12.790 ************************************ 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:25:12.790 * Looking for test storage... 00:25:12.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:12.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.790 --rc genhtml_branch_coverage=1 00:25:12.790 --rc genhtml_function_coverage=1 00:25:12.790 --rc genhtml_legend=1 00:25:12.790 --rc geninfo_all_blocks=1 00:25:12.790 --rc geninfo_unexecuted_blocks=1 00:25:12.790 00:25:12.790 ' 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:12.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.790 --rc genhtml_branch_coverage=1 00:25:12.790 --rc genhtml_function_coverage=1 
00:25:12.790 --rc genhtml_legend=1 00:25:12.790 --rc geninfo_all_blocks=1 00:25:12.790 --rc geninfo_unexecuted_blocks=1 00:25:12.790 00:25:12.790 ' 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:12.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.790 --rc genhtml_branch_coverage=1 00:25:12.790 --rc genhtml_function_coverage=1 00:25:12.790 --rc genhtml_legend=1 00:25:12.790 --rc geninfo_all_blocks=1 00:25:12.790 --rc geninfo_unexecuted_blocks=1 00:25:12.790 00:25:12.790 ' 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:12.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.790 --rc genhtml_branch_coverage=1 00:25:12.790 --rc genhtml_function_coverage=1 00:25:12.790 --rc genhtml_legend=1 00:25:12.790 --rc geninfo_all_blocks=1 00:25:12.790 --rc geninfo_unexecuted_blocks=1 00:25:12.790 00:25:12.790 ' 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.790 10:53:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:12.791 
10:53:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:12.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:12.791 00:25:12.791 real 0m0.210s 00:25:12.791 user 0m0.123s 00:25:12.791 sys 0m0.100s 00:25:12.791 10:53:02 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:12.791 ************************************ 00:25:12.791 END TEST dma 00:25:12.791 ************************************ 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.791 ************************************ 00:25:12.791 START TEST nvmf_identify 00:25:12.791 ************************************ 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:12.791 * Looking for test storage... 
00:25:12.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:12.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.791 --rc genhtml_branch_coverage=1 00:25:12.791 --rc genhtml_function_coverage=1 00:25:12.791 --rc genhtml_legend=1 00:25:12.791 --rc geninfo_all_blocks=1 00:25:12.791 --rc geninfo_unexecuted_blocks=1 00:25:12.791 00:25:12.791 ' 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:25:12.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.791 --rc genhtml_branch_coverage=1 00:25:12.791 --rc genhtml_function_coverage=1 00:25:12.791 --rc genhtml_legend=1 00:25:12.791 --rc geninfo_all_blocks=1 00:25:12.791 --rc geninfo_unexecuted_blocks=1 00:25:12.791 00:25:12.791 ' 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:12.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.791 --rc genhtml_branch_coverage=1 00:25:12.791 --rc genhtml_function_coverage=1 00:25:12.791 --rc genhtml_legend=1 00:25:12.791 --rc geninfo_all_blocks=1 00:25:12.791 --rc geninfo_unexecuted_blocks=1 00:25:12.791 00:25:12.791 ' 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:12.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.791 --rc genhtml_branch_coverage=1 00:25:12.791 --rc genhtml_function_coverage=1 00:25:12.791 --rc genhtml_legend=1 00:25:12.791 --rc geninfo_all_blocks=1 00:25:12.791 --rc geninfo_unexecuted_blocks=1 00:25:12.791 00:25:12.791 ' 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.791 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:12.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:25:12.792 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:19.368 10:53:08 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:19.368 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:19.369 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:19.369 
10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:19.369 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:19.369 Found net devices under 0000:86:00.0: cvl_0_0 00:25:19.369 10:53:08 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:19.369 Found net devices under 0000:86:00.1: cvl_0_1 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:19.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:25:19.369 00:25:19.369 --- 10.0.0.2 ping statistics --- 00:25:19.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.369 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:19.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:19.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:25:19.369 00:25:19.369 --- 10.0.0.1 ping statistics --- 00:25:19.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.369 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4011967 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4011967 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 4011967 ']' 00:25:19.369 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.370 [2024-11-19 10:53:08.551439] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:25:19.370 [2024-11-19 10:53:08.551492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.370 [2024-11-19 10:53:08.630881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:19.370 [2024-11-19 10:53:08.672443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.370 [2024-11-19 10:53:08.672483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.370 [2024-11-19 10:53:08.672491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.370 [2024-11-19 10:53:08.672496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:19.370 [2024-11-19 10:53:08.672501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:19.370 [2024-11-19 10:53:08.674097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.370 [2024-11-19 10:53:08.674225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:19.370 [2024-11-19 10:53:08.674307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.370 [2024-11-19 10:53:08.674308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.370 [2024-11-19 10:53:08.782986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.370 Malloc0 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.370 10:53:08 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.370 [2024-11-19 10:53:08.890724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.370 10:53:08 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.370 [ 00:25:19.370 { 00:25:19.370 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:19.370 "subtype": "Discovery", 00:25:19.370 "listen_addresses": [ 00:25:19.370 { 00:25:19.370 "trtype": "TCP", 00:25:19.370 "adrfam": "IPv4", 00:25:19.370 "traddr": "10.0.0.2", 00:25:19.370 "trsvcid": "4420" 00:25:19.370 } 00:25:19.370 ], 00:25:19.370 "allow_any_host": true, 00:25:19.370 "hosts": [] 00:25:19.370 }, 00:25:19.370 { 00:25:19.370 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:19.370 "subtype": "NVMe", 00:25:19.370 "listen_addresses": [ 00:25:19.370 { 00:25:19.370 "trtype": "TCP", 00:25:19.370 "adrfam": "IPv4", 00:25:19.370 "traddr": "10.0.0.2", 00:25:19.370 "trsvcid": "4420" 00:25:19.370 } 00:25:19.370 ], 00:25:19.370 "allow_any_host": true, 00:25:19.370 "hosts": [], 00:25:19.370 "serial_number": "SPDK00000000000001", 00:25:19.370 "model_number": "SPDK bdev Controller", 00:25:19.370 "max_namespaces": 32, 00:25:19.370 "min_cntlid": 1, 00:25:19.370 "max_cntlid": 65519, 00:25:19.370 "namespaces": [ 00:25:19.370 { 00:25:19.370 "nsid": 1, 00:25:19.370 "bdev_name": "Malloc0", 00:25:19.370 "name": "Malloc0", 00:25:19.370 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:19.370 "eui64": "ABCDEF0123456789", 00:25:19.370 "uuid": "f57c83d4-6e98-4839-83a4-4f6a3e807bf6" 00:25:19.370 } 00:25:19.370 ] 00:25:19.370 } 00:25:19.370 ] 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.370 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:19.370 [2024-11-19 10:53:08.942074] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:25:19.370 [2024-11-19 10:53:08.942108] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012064 ] 00:25:19.370 [2024-11-19 10:53:08.983708] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:25:19.370 [2024-11-19 10:53:08.983755] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:19.370 [2024-11-19 10:53:08.983760] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:19.370 [2024-11-19 10:53:08.983770] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:19.370 [2024-11-19 10:53:08.983780] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:19.370 [2024-11-19 10:53:08.984373] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:25:19.370 [2024-11-19 10:53:08.984404] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c80690 0 00:25:19.370 [2024-11-19 10:53:08.998217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:19.370 [2024-11-19 10:53:08.998233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:19.371 [2024-11-19 10:53:08.998238] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:19.371 [2024-11-19 10:53:08.998240] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:19.371 [2024-11-19 10:53:08.998272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:08.998277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:08.998280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c80690) 00:25:19.371 [2024-11-19 10:53:08.998292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:19.371 [2024-11-19 10:53:08.998309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2100, cid 0, qid 0 00:25:19.371 [2024-11-19 10:53:09.006211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.371 [2024-11-19 10:53:09.006220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.371 [2024-11-19 10:53:09.006224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.006228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2100) on tqpair=0x1c80690 00:25:19.371 [2024-11-19 10:53:09.006237] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:19.371 [2024-11-19 10:53:09.006243] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:25:19.371 [2024-11-19 10:53:09.006248] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:25:19.371 [2024-11-19 10:53:09.006260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.006264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.006267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c80690) 
00:25:19.371 [2024-11-19 10:53:09.006274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.371 [2024-11-19 10:53:09.006286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2100, cid 0, qid 0 00:25:19.371 [2024-11-19 10:53:09.006447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.371 [2024-11-19 10:53:09.006453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.371 [2024-11-19 10:53:09.006456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.006460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2100) on tqpair=0x1c80690 00:25:19.371 [2024-11-19 10:53:09.006464] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:25:19.371 [2024-11-19 10:53:09.006472] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:25:19.371 [2024-11-19 10:53:09.006482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.006485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.006489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c80690) 00:25:19.371 [2024-11-19 10:53:09.006495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.371 [2024-11-19 10:53:09.006505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2100, cid 0, qid 0 00:25:19.371 [2024-11-19 10:53:09.006570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.371 [2024-11-19 10:53:09.006576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:25:19.371 [2024-11-19 10:53:09.006579] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.006582] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2100) on tqpair=0x1c80690 00:25:19.371 [2024-11-19 10:53:09.006588] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:25:19.371 [2024-11-19 10:53:09.006594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:19.371 [2024-11-19 10:53:09.006600] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.006604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.006607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c80690) 00:25:19.371 [2024-11-19 10:53:09.006613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.371 [2024-11-19 10:53:09.006623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2100, cid 0, qid 0 00:25:19.371 [2024-11-19 10:53:09.006686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.371 [2024-11-19 10:53:09.006692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.371 [2024-11-19 10:53:09.006695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.006698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2100) on tqpair=0x1c80690 00:25:19.371 [2024-11-19 10:53:09.006703] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:19.371 [2024-11-19 10:53:09.006711] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.006715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.006718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c80690) 00:25:19.371 [2024-11-19 10:53:09.006724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.371 [2024-11-19 10:53:09.006733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2100, cid 0, qid 0 00:25:19.371 [2024-11-19 10:53:09.006803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.371 [2024-11-19 10:53:09.006808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.371 [2024-11-19 10:53:09.006812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.006815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2100) on tqpair=0x1c80690 00:25:19.371 [2024-11-19 10:53:09.006819] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:19.371 [2024-11-19 10:53:09.006823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:19.371 [2024-11-19 10:53:09.006830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:19.371 [2024-11-19 10:53:09.006940] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:25:19.371 [2024-11-19 10:53:09.006945] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:25:19.371 [2024-11-19 10:53:09.006952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.006955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.006958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c80690) 00:25:19.371 [2024-11-19 10:53:09.006964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.371 [2024-11-19 10:53:09.006974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2100, cid 0, qid 0 00:25:19.371 [2024-11-19 10:53:09.007039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.371 [2024-11-19 10:53:09.007045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.371 [2024-11-19 10:53:09.007048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.007051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2100) on tqpair=0x1c80690 00:25:19.371 [2024-11-19 10:53:09.007055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:19.371 [2024-11-19 10:53:09.007064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.007068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.371 [2024-11-19 10:53:09.007071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c80690) 00:25:19.372 [2024-11-19 10:53:09.007076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.372 [2024-11-19 10:53:09.007085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2100, cid 0, qid 0 00:25:19.372 [2024-11-19 
10:53:09.007142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.372 [2024-11-19 10:53:09.007148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.372 [2024-11-19 10:53:09.007151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2100) on tqpair=0x1c80690 00:25:19.372 [2024-11-19 10:53:09.007158] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:19.372 [2024-11-19 10:53:09.007163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:19.372 [2024-11-19 10:53:09.007170] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:25:19.372 [2024-11-19 10:53:09.007177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:19.372 [2024-11-19 10:53:09.007185] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c80690) 00:25:19.372 [2024-11-19 10:53:09.007194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.372 [2024-11-19 10:53:09.007210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2100, cid 0, qid 0 00:25:19.372 [2024-11-19 10:53:09.007346] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.372 [2024-11-19 10:53:09.007351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:25:19.372 [2024-11-19 10:53:09.007354] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007358] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c80690): datao=0, datal=4096, cccid=0 00:25:19.372 [2024-11-19 10:53:09.007368] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce2100) on tqpair(0x1c80690): expected_datao=0, payload_size=4096 00:25:19.372 [2024-11-19 10:53:09.007372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007378] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007382] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.372 [2024-11-19 10:53:09.007395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.372 [2024-11-19 10:53:09.007398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007402] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2100) on tqpair=0x1c80690 00:25:19.372 [2024-11-19 10:53:09.007408] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:25:19.372 [2024-11-19 10:53:09.007413] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:25:19.372 [2024-11-19 10:53:09.007417] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:25:19.372 [2024-11-19 10:53:09.007425] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:25:19.372 [2024-11-19 10:53:09.007429] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:25:19.372 [2024-11-19 10:53:09.007433] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:25:19.372 [2024-11-19 10:53:09.007443] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:19.372 [2024-11-19 10:53:09.007449] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c80690) 00:25:19.372 [2024-11-19 10:53:09.007463] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:19.372 [2024-11-19 10:53:09.007474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2100, cid 0, qid 0 00:25:19.372 [2024-11-19 10:53:09.007545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.372 [2024-11-19 10:53:09.007551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.372 [2024-11-19 10:53:09.007554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2100) on tqpair=0x1c80690 00:25:19.372 [2024-11-19 10:53:09.007564] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c80690) 00:25:19.372 [2024-11-19 10:53:09.007576] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.372 [2024-11-19 10:53:09.007581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007584] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c80690) 00:25:19.372 [2024-11-19 10:53:09.007593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.372 [2024-11-19 10:53:09.007598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007603] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007606] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c80690) 00:25:19.372 [2024-11-19 10:53:09.007611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.372 [2024-11-19 10:53:09.007617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.372 [2024-11-19 10:53:09.007628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.372 [2024-11-19 10:53:09.007632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:19.372 [2024-11-19 10:53:09.007640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:19.372 [2024-11-19 10:53:09.007646] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c80690) 00:25:19.372 [2024-11-19 10:53:09.007654] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.372 [2024-11-19 10:53:09.007666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2100, cid 0, qid 0 00:25:19.372 [2024-11-19 10:53:09.007670] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2280, cid 1, qid 0 00:25:19.372 [2024-11-19 10:53:09.007675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2400, cid 2, qid 0 00:25:19.372 [2024-11-19 10:53:09.007678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.372 [2024-11-19 10:53:09.007682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2700, cid 4, qid 0 00:25:19.372 [2024-11-19 10:53:09.007785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.372 [2024-11-19 10:53:09.007791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.372 [2024-11-19 10:53:09.007794] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007798] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2700) on tqpair=0x1c80690 00:25:19.372 [2024-11-19 10:53:09.007805] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:25:19.372 [2024-11-19 10:53:09.007809] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:25:19.372 [2024-11-19 10:53:09.007818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.372 [2024-11-19 10:53:09.007822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c80690) 00:25:19.372 [2024-11-19 10:53:09.007827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.372 [2024-11-19 10:53:09.007837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2700, cid 4, qid 0 00:25:19.373 [2024-11-19 10:53:09.007914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.373 [2024-11-19 10:53:09.007920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:19.373 [2024-11-19 10:53:09.007923] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.007926] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c80690): datao=0, datal=4096, cccid=4 00:25:19.373 [2024-11-19 10:53:09.007930] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce2700) on tqpair(0x1c80690): expected_datao=0, payload_size=4096 00:25:19.373 [2024-11-19 10:53:09.007935] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.007945] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.007949] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.048372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.373 [2024-11-19 10:53:09.048383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.373 [2024-11-19 10:53:09.048386] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.048390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1ce2700) on tqpair=0x1c80690 00:25:19.373 [2024-11-19 10:53:09.048402] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:25:19.373 [2024-11-19 10:53:09.048423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.048427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c80690) 00:25:19.373 [2024-11-19 10:53:09.048435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.373 [2024-11-19 10:53:09.048441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.048445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.048447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c80690) 00:25:19.373 [2024-11-19 10:53:09.048453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.373 [2024-11-19 10:53:09.048468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2700, cid 4, qid 0 00:25:19.373 [2024-11-19 10:53:09.048473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2880, cid 5, qid 0 00:25:19.373 [2024-11-19 10:53:09.048570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.373 [2024-11-19 10:53:09.048576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:19.373 [2024-11-19 10:53:09.048579] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.048583] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c80690): datao=0, datal=1024, cccid=4 00:25:19.373 [2024-11-19 10:53:09.048586] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce2700) on tqpair(0x1c80690): expected_datao=0, payload_size=1024 00:25:19.373 [2024-11-19 10:53:09.048590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.048595] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.048599] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.048603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.373 [2024-11-19 10:53:09.048608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.373 [2024-11-19 10:53:09.048611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.048615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2880) on tqpair=0x1c80690 00:25:19.373 [2024-11-19 10:53:09.092210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.373 [2024-11-19 10:53:09.092221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.373 [2024-11-19 10:53:09.092225] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.092228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2700) on tqpair=0x1c80690 00:25:19.373 [2024-11-19 10:53:09.092238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.092242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c80690) 00:25:19.373 [2024-11-19 10:53:09.092250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.373 [2024-11-19 10:53:09.092266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2700, cid 4, qid 0 00:25:19.373 [2024-11-19 10:53:09.092425] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.373 [2024-11-19 10:53:09.092432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:19.373 [2024-11-19 10:53:09.092435] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.092438] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c80690): datao=0, datal=3072, cccid=4 00:25:19.373 [2024-11-19 10:53:09.092442] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce2700) on tqpair(0x1c80690): expected_datao=0, payload_size=3072 00:25:19.373 [2024-11-19 10:53:09.092445] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.092451] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.092454] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.092464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.373 [2024-11-19 10:53:09.092469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.373 [2024-11-19 10:53:09.092472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.092475] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2700) on tqpair=0x1c80690 00:25:19.373 [2024-11-19 10:53:09.092483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.092486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c80690) 00:25:19.373 [2024-11-19 10:53:09.092492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.373 [2024-11-19 10:53:09.092505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2700, cid 4, qid 0 00:25:19.373 [2024-11-19 
10:53:09.092575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.373 [2024-11-19 10:53:09.092580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:19.373 [2024-11-19 10:53:09.092583] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.092586] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c80690): datao=0, datal=8, cccid=4 00:25:19.373 [2024-11-19 10:53:09.092590] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce2700) on tqpair(0x1c80690): expected_datao=0, payload_size=8 00:25:19.373 [2024-11-19 10:53:09.092594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.092599] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.092603] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.133337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.373 [2024-11-19 10:53:09.133346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.373 [2024-11-19 10:53:09.133350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.373 [2024-11-19 10:53:09.133353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2700) on tqpair=0x1c80690 00:25:19.373 ===================================================== 00:25:19.373 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:19.373 ===================================================== 00:25:19.373 Controller Capabilities/Features 00:25:19.373 ================================ 00:25:19.373 Vendor ID: 0000 00:25:19.373 Subsystem Vendor ID: 0000 00:25:19.373 Serial Number: .................... 00:25:19.373 Model Number: ........................................ 
00:25:19.373 Firmware Version: 25.01 00:25:19.373 Recommended Arb Burst: 0 00:25:19.373 IEEE OUI Identifier: 00 00 00 00:25:19.373 Multi-path I/O 00:25:19.373 May have multiple subsystem ports: No 00:25:19.373 May have multiple controllers: No 00:25:19.373 Associated with SR-IOV VF: No 00:25:19.373 Max Data Transfer Size: 131072 00:25:19.373 Max Number of Namespaces: 0 00:25:19.373 Max Number of I/O Queues: 1024 00:25:19.373 NVMe Specification Version (VS): 1.3 00:25:19.373 NVMe Specification Version (Identify): 1.3 00:25:19.373 Maximum Queue Entries: 128 00:25:19.373 Contiguous Queues Required: Yes 00:25:19.373 Arbitration Mechanisms Supported 00:25:19.373 Weighted Round Robin: Not Supported 00:25:19.373 Vendor Specific: Not Supported 00:25:19.373 Reset Timeout: 15000 ms 00:25:19.373 Doorbell Stride: 4 bytes 00:25:19.373 NVM Subsystem Reset: Not Supported 00:25:19.373 Command Sets Supported 00:25:19.374 NVM Command Set: Supported 00:25:19.374 Boot Partition: Not Supported 00:25:19.374 Memory Page Size Minimum: 4096 bytes 00:25:19.374 Memory Page Size Maximum: 4096 bytes 00:25:19.374 Persistent Memory Region: Not Supported 00:25:19.374 Optional Asynchronous Events Supported 00:25:19.374 Namespace Attribute Notices: Not Supported 00:25:19.374 Firmware Activation Notices: Not Supported 00:25:19.374 ANA Change Notices: Not Supported 00:25:19.374 PLE Aggregate Log Change Notices: Not Supported 00:25:19.374 LBA Status Info Alert Notices: Not Supported 00:25:19.374 EGE Aggregate Log Change Notices: Not Supported 00:25:19.374 Normal NVM Subsystem Shutdown event: Not Supported 00:25:19.374 Zone Descriptor Change Notices: Not Supported 00:25:19.374 Discovery Log Change Notices: Supported 00:25:19.374 Controller Attributes 00:25:19.374 128-bit Host Identifier: Not Supported 00:25:19.374 Non-Operational Permissive Mode: Not Supported 00:25:19.374 NVM Sets: Not Supported 00:25:19.374 Read Recovery Levels: Not Supported 00:25:19.374 Endurance Groups: Not Supported 00:25:19.374 
Predictable Latency Mode: Not Supported 00:25:19.374 Traffic Based Keep ALive: Not Supported 00:25:19.374 Namespace Granularity: Not Supported 00:25:19.374 SQ Associations: Not Supported 00:25:19.374 UUID List: Not Supported 00:25:19.374 Multi-Domain Subsystem: Not Supported 00:25:19.374 Fixed Capacity Management: Not Supported 00:25:19.374 Variable Capacity Management: Not Supported 00:25:19.374 Delete Endurance Group: Not Supported 00:25:19.374 Delete NVM Set: Not Supported 00:25:19.374 Extended LBA Formats Supported: Not Supported 00:25:19.374 Flexible Data Placement Supported: Not Supported 00:25:19.374 00:25:19.374 Controller Memory Buffer Support 00:25:19.374 ================================ 00:25:19.374 Supported: No 00:25:19.374 00:25:19.374 Persistent Memory Region Support 00:25:19.374 ================================ 00:25:19.374 Supported: No 00:25:19.374 00:25:19.374 Admin Command Set Attributes 00:25:19.374 ============================ 00:25:19.374 Security Send/Receive: Not Supported 00:25:19.374 Format NVM: Not Supported 00:25:19.374 Firmware Activate/Download: Not Supported 00:25:19.374 Namespace Management: Not Supported 00:25:19.374 Device Self-Test: Not Supported 00:25:19.374 Directives: Not Supported 00:25:19.374 NVMe-MI: Not Supported 00:25:19.374 Virtualization Management: Not Supported 00:25:19.374 Doorbell Buffer Config: Not Supported 00:25:19.374 Get LBA Status Capability: Not Supported 00:25:19.374 Command & Feature Lockdown Capability: Not Supported 00:25:19.374 Abort Command Limit: 1 00:25:19.374 Async Event Request Limit: 4 00:25:19.374 Number of Firmware Slots: N/A 00:25:19.374 Firmware Slot 1 Read-Only: N/A 00:25:19.374 Firmware Activation Without Reset: N/A 00:25:19.374 Multiple Update Detection Support: N/A 00:25:19.374 Firmware Update Granularity: No Information Provided 00:25:19.374 Per-Namespace SMART Log: No 00:25:19.374 Asymmetric Namespace Access Log Page: Not Supported 00:25:19.374 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:25:19.374 Command Effects Log Page: Not Supported 00:25:19.374 Get Log Page Extended Data: Supported 00:25:19.374 Telemetry Log Pages: Not Supported 00:25:19.374 Persistent Event Log Pages: Not Supported 00:25:19.374 Supported Log Pages Log Page: May Support 00:25:19.374 Commands Supported & Effects Log Page: Not Supported 00:25:19.374 Feature Identifiers & Effects Log Page:May Support 00:25:19.374 NVMe-MI Commands & Effects Log Page: May Support 00:25:19.374 Data Area 4 for Telemetry Log: Not Supported 00:25:19.374 Error Log Page Entries Supported: 128 00:25:19.374 Keep Alive: Not Supported 00:25:19.374 00:25:19.374 NVM Command Set Attributes 00:25:19.374 ========================== 00:25:19.374 Submission Queue Entry Size 00:25:19.374 Max: 1 00:25:19.374 Min: 1 00:25:19.374 Completion Queue Entry Size 00:25:19.374 Max: 1 00:25:19.374 Min: 1 00:25:19.374 Number of Namespaces: 0 00:25:19.374 Compare Command: Not Supported 00:25:19.374 Write Uncorrectable Command: Not Supported 00:25:19.374 Dataset Management Command: Not Supported 00:25:19.374 Write Zeroes Command: Not Supported 00:25:19.374 Set Features Save Field: Not Supported 00:25:19.374 Reservations: Not Supported 00:25:19.374 Timestamp: Not Supported 00:25:19.374 Copy: Not Supported 00:25:19.374 Volatile Write Cache: Not Present 00:25:19.374 Atomic Write Unit (Normal): 1 00:25:19.374 Atomic Write Unit (PFail): 1 00:25:19.374 Atomic Compare & Write Unit: 1 00:25:19.374 Fused Compare & Write: Supported 00:25:19.374 Scatter-Gather List 00:25:19.374 SGL Command Set: Supported 00:25:19.374 SGL Keyed: Supported 00:25:19.374 SGL Bit Bucket Descriptor: Not Supported 00:25:19.374 SGL Metadata Pointer: Not Supported 00:25:19.374 Oversized SGL: Not Supported 00:25:19.374 SGL Metadata Address: Not Supported 00:25:19.374 SGL Offset: Supported 00:25:19.374 Transport SGL Data Block: Not Supported 00:25:19.374 Replay Protected Memory Block: Not Supported 00:25:19.374 00:25:19.374 
Firmware Slot Information 00:25:19.374 ========================= 00:25:19.374 Active slot: 0 00:25:19.374 00:25:19.374 00:25:19.374 Error Log 00:25:19.374 ========= 00:25:19.374 00:25:19.374 Active Namespaces 00:25:19.374 ================= 00:25:19.374 Discovery Log Page 00:25:19.374 ================== 00:25:19.374 Generation Counter: 2 00:25:19.374 Number of Records: 2 00:25:19.374 Record Format: 0 00:25:19.374 00:25:19.374 Discovery Log Entry 0 00:25:19.374 ---------------------- 00:25:19.374 Transport Type: 3 (TCP) 00:25:19.374 Address Family: 1 (IPv4) 00:25:19.374 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:19.374 Entry Flags: 00:25:19.374 Duplicate Returned Information: 1 00:25:19.374 Explicit Persistent Connection Support for Discovery: 1 00:25:19.374 Transport Requirements: 00:25:19.374 Secure Channel: Not Required 00:25:19.374 Port ID: 0 (0x0000) 00:25:19.374 Controller ID: 65535 (0xffff) 00:25:19.374 Admin Max SQ Size: 128 00:25:19.374 Transport Service Identifier: 4420 00:25:19.375 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:19.375 Transport Address: 10.0.0.2 00:25:19.375 Discovery Log Entry 1 00:25:19.375 ---------------------- 00:25:19.375 Transport Type: 3 (TCP) 00:25:19.375 Address Family: 1 (IPv4) 00:25:19.375 Subsystem Type: 2 (NVM Subsystem) 00:25:19.375 Entry Flags: 00:25:19.375 Duplicate Returned Information: 0 00:25:19.375 Explicit Persistent Connection Support for Discovery: 0 00:25:19.375 Transport Requirements: 00:25:19.375 Secure Channel: Not Required 00:25:19.375 Port ID: 0 (0x0000) 00:25:19.375 Controller ID: 65535 (0xffff) 00:25:19.375 Admin Max SQ Size: 128 00:25:19.375 Transport Service Identifier: 4420 00:25:19.375 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:19.375 Transport Address: 10.0.0.2 [2024-11-19 10:53:09.133434] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:25:19.375 [2024-11-19 
10:53:09.133445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2100) on tqpair=0x1c80690 00:25:19.375 [2024-11-19 10:53:09.133451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.375 [2024-11-19 10:53:09.133456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2280) on tqpair=0x1c80690 00:25:19.375 [2024-11-19 10:53:09.133460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.375 [2024-11-19 10:53:09.133464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2400) on tqpair=0x1c80690 00:25:19.375 [2024-11-19 10:53:09.133468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.375 [2024-11-19 10:53:09.133474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.375 [2024-11-19 10:53:09.133478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.375 [2024-11-19 10:53:09.133488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.133492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.133495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.375 [2024-11-19 10:53:09.133501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.375 [2024-11-19 10:53:09.133515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.375 [2024-11-19 10:53:09.133573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.375 [2024-11-19 
10:53:09.133578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.375 [2024-11-19 10:53:09.133581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.133584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.375 [2024-11-19 10:53:09.133590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.133594] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.133597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.375 [2024-11-19 10:53:09.133602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.375 [2024-11-19 10:53:09.133614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.375 [2024-11-19 10:53:09.133685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.375 [2024-11-19 10:53:09.133691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.375 [2024-11-19 10:53:09.133694] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.133697] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.375 [2024-11-19 10:53:09.133701] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:25:19.375 [2024-11-19 10:53:09.133705] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:25:19.375 [2024-11-19 10:53:09.133713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.133716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.375 
[2024-11-19 10:53:09.133719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.375 [2024-11-19 10:53:09.133725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.375 [2024-11-19 10:53:09.133734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.375 [2024-11-19 10:53:09.133797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.375 [2024-11-19 10:53:09.133802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.375 [2024-11-19 10:53:09.133805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.133809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.375 [2024-11-19 10:53:09.133817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.133820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.133823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.375 [2024-11-19 10:53:09.133829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.375 [2024-11-19 10:53:09.133840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.375 [2024-11-19 10:53:09.133909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.375 [2024-11-19 10:53:09.133914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.375 [2024-11-19 10:53:09.133917] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.133920] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on 
tqpair=0x1c80690 00:25:19.375 [2024-11-19 10:53:09.133928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.133932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.133935] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.375 [2024-11-19 10:53:09.133940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.375 [2024-11-19 10:53:09.133950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.375 [2024-11-19 10:53:09.134014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.375 [2024-11-19 10:53:09.134019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.375 [2024-11-19 10:53:09.134022] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.134026] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.375 [2024-11-19 10:53:09.134035] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.134038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.134041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.375 [2024-11-19 10:53:09.134047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.375 [2024-11-19 10:53:09.134056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.375 [2024-11-19 10:53:09.134115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.375 [2024-11-19 10:53:09.134121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:25:19.375 [2024-11-19 10:53:09.134124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.134127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.375 [2024-11-19 10:53:09.134134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.134138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.134141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.375 [2024-11-19 10:53:09.134147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.375 [2024-11-19 10:53:09.134156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.375 [2024-11-19 10:53:09.134224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.375 [2024-11-19 10:53:09.134230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.375 [2024-11-19 10:53:09.134232] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.134236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.375 [2024-11-19 10:53:09.134243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.134247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.134250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.375 [2024-11-19 10:53:09.134256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.375 [2024-11-19 10:53:09.134266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1ce2580, cid 3, qid 0 00:25:19.375 [2024-11-19 10:53:09.134332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.375 [2024-11-19 10:53:09.134338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.375 [2024-11-19 10:53:09.134341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.134344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.375 [2024-11-19 10:53:09.134352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.375 [2024-11-19 10:53:09.134356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.376 [2024-11-19 10:53:09.134364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.376 [2024-11-19 10:53:09.134374] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.376 [2024-11-19 10:53:09.134452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.376 [2024-11-19 10:53:09.134457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.376 [2024-11-19 10:53:09.134460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.376 [2024-11-19 10:53:09.134472] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134475] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.376 [2024-11-19 10:53:09.134484] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.376 [2024-11-19 10:53:09.134493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.376 [2024-11-19 10:53:09.134553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.376 [2024-11-19 10:53:09.134559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.376 [2024-11-19 10:53:09.134562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.376 [2024-11-19 10:53:09.134573] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134577] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.376 [2024-11-19 10:53:09.134585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.376 [2024-11-19 10:53:09.134594] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.376 [2024-11-19 10:53:09.134651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.376 [2024-11-19 10:53:09.134656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.376 [2024-11-19 10:53:09.134659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.376 [2024-11-19 10:53:09.134670] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134674] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134677] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.376 [2024-11-19 10:53:09.134682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.376 [2024-11-19 10:53:09.134692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.376 [2024-11-19 10:53:09.134748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.376 [2024-11-19 10:53:09.134756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.376 [2024-11-19 10:53:09.134759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.376 [2024-11-19 10:53:09.134770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.376 [2024-11-19 10:53:09.134782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.376 [2024-11-19 10:53:09.134791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.376 [2024-11-19 10:53:09.134851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.376 [2024-11-19 10:53:09.134857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.376 [2024-11-19 10:53:09.134859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134863] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.376 [2024-11-19 10:53:09.134871] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.376 [2024-11-19 10:53:09.134883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.376 [2024-11-19 10:53:09.134893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.376 [2024-11-19 10:53:09.134952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.376 [2024-11-19 10:53:09.134958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.376 [2024-11-19 10:53:09.134960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.376 [2024-11-19 10:53:09.134971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134975] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.134978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.376 [2024-11-19 10:53:09.134983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.376 [2024-11-19 10:53:09.134993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.376 [2024-11-19 10:53:09.135051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.376 [2024-11-19 
10:53:09.135057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.376 [2024-11-19 10:53:09.135060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.135063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.376 [2024-11-19 10:53:09.135071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.135074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.135077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.376 [2024-11-19 10:53:09.135083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.376 [2024-11-19 10:53:09.135092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.376 [2024-11-19 10:53:09.135150] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.376 [2024-11-19 10:53:09.135156] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.376 [2024-11-19 10:53:09.135160] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.135163] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.376 [2024-11-19 10:53:09.135171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.135175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.135178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.376 [2024-11-19 10:53:09.135183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.376 [2024-11-19 
10:53:09.135193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.376 [2024-11-19 10:53:09.135259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.376 [2024-11-19 10:53:09.135265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.376 [2024-11-19 10:53:09.135268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.135271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.376 [2024-11-19 10:53:09.135279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.135283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.376 [2024-11-19 10:53:09.135286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.376 [2024-11-19 10:53:09.135291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.377 [2024-11-19 10:53:09.135301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.377 [2024-11-19 10:53:09.135368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.377 [2024-11-19 10:53:09.135373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.377 [2024-11-19 10:53:09.135376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.135379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.377 [2024-11-19 10:53:09.135388] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.135391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.135394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.377 [2024-11-19 10:53:09.135400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.377 [2024-11-19 10:53:09.135409] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.377 [2024-11-19 10:53:09.135469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.377 [2024-11-19 10:53:09.135474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.377 [2024-11-19 10:53:09.135477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.135480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.377 [2024-11-19 10:53:09.135488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.135492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.135494] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.377 [2024-11-19 10:53:09.135500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.377 [2024-11-19 10:53:09.135510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.377 [2024-11-19 10:53:09.135575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.377 [2024-11-19 10:53:09.135581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.377 [2024-11-19 10:53:09.135584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.135589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.377 [2024-11-19 10:53:09.135597] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.135601] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.135604] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.377 [2024-11-19 10:53:09.135610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.377 [2024-11-19 10:53:09.135619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.377 [2024-11-19 10:53:09.135692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.377 [2024-11-19 10:53:09.135697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.377 [2024-11-19 10:53:09.135700] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.135703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.377 [2024-11-19 10:53:09.135712] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.135716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.135719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.377 [2024-11-19 10:53:09.135724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.377 [2024-11-19 10:53:09.135733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.377 [2024-11-19 10:53:09.135796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.377 [2024-11-19 10:53:09.135802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.377 [2024-11-19 10:53:09.135805] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.135808] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.377 [2024-11-19 10:53:09.135816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.135819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.135822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.377 [2024-11-19 10:53:09.135828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.377 [2024-11-19 10:53:09.135837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.377 [2024-11-19 10:53:09.139208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.377 [2024-11-19 10:53:09.139215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.377 [2024-11-19 10:53:09.139218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.139222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.377 [2024-11-19 10:53:09.139231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.139235] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.139238] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c80690) 00:25:19.377 [2024-11-19 10:53:09.139244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.377 [2024-11-19 10:53:09.139255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce2580, cid 3, qid 0 00:25:19.377 [2024-11-19 
10:53:09.139407] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.377 [2024-11-19 10:53:09.139413] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.377 [2024-11-19 10:53:09.139416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.377 [2024-11-19 10:53:09.139419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce2580) on tqpair=0x1c80690 00:25:19.377 [2024-11-19 10:53:09.139428] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:25:19.377 00:25:19.377 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:19.639 [2024-11-19 10:53:09.177538] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:25:19.639 [2024-11-19 10:53:09.177582] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012072 ] 00:25:19.639 [2024-11-19 10:53:09.217388] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:25:19.639 [2024-11-19 10:53:09.217429] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:19.639 [2024-11-19 10:53:09.217434] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:19.639 [2024-11-19 10:53:09.217444] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:19.639 [2024-11-19 10:53:09.217452] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:19.639 [2024-11-19 10:53:09.221386] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:25:19.639 [2024-11-19 10:53:09.221411] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb39690 0 00:25:19.639 [2024-11-19 10:53:09.229217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:19.639 [2024-11-19 10:53:09.229231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:19.639 [2024-11-19 10:53:09.229235] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:19.639 [2024-11-19 10:53:09.229238] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:19.639 [2024-11-19 10:53:09.229263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.639 [2024-11-19 10:53:09.229268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.639 [2024-11-19 10:53:09.229271] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb39690) 00:25:19.639 [2024-11-19 10:53:09.229281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:19.639 [2024-11-19 10:53:09.229296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b100, cid 0, qid 0 00:25:19.639 [2024-11-19 10:53:09.237212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.639 [2024-11-19 10:53:09.237220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.639 [2024-11-19 10:53:09.237223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.639 [2024-11-19 10:53:09.237227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b100) on tqpair=0xb39690 00:25:19.639 [2024-11-19 10:53:09.237238] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:19.639 [2024-11-19 10:53:09.237244] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:25:19.639 [2024-11-19 10:53:09.237248] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:25:19.639 [2024-11-19 10:53:09.237259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.639 [2024-11-19 10:53:09.237262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.639 [2024-11-19 10:53:09.237266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb39690) 00:25:19.639 [2024-11-19 10:53:09.237272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.639 [2024-11-19 10:53:09.237287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b100, cid 0, qid 0 00:25:19.639 [2024-11-19 10:53:09.237436] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.639 [2024-11-19 10:53:09.237442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.639 [2024-11-19 10:53:09.237445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.639 [2024-11-19 10:53:09.237449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b100) on tqpair=0xb39690 00:25:19.639 [2024-11-19 10:53:09.237453] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:25:19.639 [2024-11-19 10:53:09.237459] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:25:19.639 [2024-11-19 10:53:09.237466] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.639 [2024-11-19 10:53:09.237469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.639 [2024-11-19 10:53:09.237472] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb39690) 00:25:19.639 [2024-11-19 10:53:09.237478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.639 [2024-11-19 10:53:09.237488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b100, cid 0, qid 0 00:25:19.639 [2024-11-19 10:53:09.237551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.639 [2024-11-19 10:53:09.237556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.639 [2024-11-19 10:53:09.237559] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.639 [2024-11-19 10:53:09.237562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b100) on tqpair=0xb39690 00:25:19.639 [2024-11-19 10:53:09.237567] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to check en (no timeout) 00:25:19.639 [2024-11-19 10:53:09.237573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:19.639 [2024-11-19 10:53:09.237579] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.639 [2024-11-19 10:53:09.237583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.639 [2024-11-19 10:53:09.237586] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb39690) 00:25:19.639 [2024-11-19 10:53:09.237591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.639 [2024-11-19 10:53:09.237600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b100, cid 0, qid 0 00:25:19.639 [2024-11-19 10:53:09.237660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.639 [2024-11-19 10:53:09.237666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.639 [2024-11-19 10:53:09.237669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.639 [2024-11-19 10:53:09.237672] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b100) on tqpair=0xb39690 00:25:19.639 [2024-11-19 10:53:09.237676] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:19.639 [2024-11-19 10:53:09.237685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.639 [2024-11-19 10:53:09.237688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.237691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb39690) 00:25:19.640 [2024-11-19 10:53:09.237697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.640 [2024-11-19 10:53:09.237707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b100, cid 0, qid 0 00:25:19.640 [2024-11-19 10:53:09.237766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.640 [2024-11-19 10:53:09.237774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.640 [2024-11-19 10:53:09.237777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.237780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b100) on tqpair=0xb39690 00:25:19.640 [2024-11-19 10:53:09.237784] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:19.640 [2024-11-19 10:53:09.237788] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:19.640 [2024-11-19 10:53:09.237795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:19.640 [2024-11-19 10:53:09.237902] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:25:19.640 [2024-11-19 10:53:09.237906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:19.640 [2024-11-19 10:53:09.237913] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.237916] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.237919] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb39690) 00:25:19.640 [2024-11-19 10:53:09.237925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.640 [2024-11-19 10:53:09.237934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b100, cid 0, qid 0 00:25:19.640 [2024-11-19 10:53:09.237998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.640 [2024-11-19 10:53:09.238004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.640 [2024-11-19 10:53:09.238007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b100) on tqpair=0xb39690 00:25:19.640 [2024-11-19 10:53:09.238014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:19.640 [2024-11-19 10:53:09.238021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238025] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb39690) 00:25:19.640 [2024-11-19 10:53:09.238034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.640 [2024-11-19 10:53:09.238043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b100, cid 0, qid 0 00:25:19.640 [2024-11-19 10:53:09.238102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.640 [2024-11-19 10:53:09.238108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.640 [2024-11-19 10:53:09.238111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b100) on tqpair=0xb39690 00:25:19.640 [2024-11-19 10:53:09.238118] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:19.640 [2024-11-19 10:53:09.238122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:19.640 [2024-11-19 10:53:09.238128] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:25:19.640 [2024-11-19 10:53:09.238135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:19.640 [2024-11-19 10:53:09.238142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb39690) 00:25:19.640 [2024-11-19 10:53:09.238155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.640 [2024-11-19 10:53:09.238164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b100, cid 0, qid 0 00:25:19.640 [2024-11-19 10:53:09.238263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.640 [2024-11-19 10:53:09.238269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:19.640 [2024-11-19 10:53:09.238272] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238275] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb39690): datao=0, datal=4096, cccid=0 00:25:19.640 [2024-11-19 10:53:09.238279] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb9b100) on tqpair(0xb39690): expected_datao=0, payload_size=4096 00:25:19.640 [2024-11-19 10:53:09.238283] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238289] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238292] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.640 [2024-11-19 10:53:09.238323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.640 [2024-11-19 10:53:09.238326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b100) on tqpair=0xb39690 00:25:19.640 [2024-11-19 10:53:09.238335] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:25:19.640 [2024-11-19 10:53:09.238340] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:25:19.640 [2024-11-19 10:53:09.238343] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:25:19.640 [2024-11-19 10:53:09.238349] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:25:19.640 [2024-11-19 10:53:09.238353] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:25:19.640 [2024-11-19 10:53:09.238357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:25:19.640 [2024-11-19 10:53:09.238367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:19.640 [2024-11-19 10:53:09.238372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238376] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb39690) 00:25:19.640 [2024-11-19 10:53:09.238385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:19.640 [2024-11-19 10:53:09.238395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b100, cid 0, qid 0 00:25:19.640 [2024-11-19 10:53:09.238457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.640 [2024-11-19 10:53:09.238462] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.640 [2024-11-19 10:53:09.238465] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b100) on tqpair=0xb39690 00:25:19.640 [2024-11-19 10:53:09.238474] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb39690) 00:25:19.640 [2024-11-19 10:53:09.238485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.640 [2024-11-19 10:53:09.238492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb39690) 00:25:19.640 [2024-11-19 10:53:09.238504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:25:19.640 [2024-11-19 10:53:09.238509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb39690) 00:25:19.640 [2024-11-19 10:53:09.238520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.640 [2024-11-19 10:53:09.238525] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb39690) 00:25:19.640 [2024-11-19 10:53:09.238536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.640 [2024-11-19 10:53:09.238540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:19.640 [2024-11-19 10:53:09.238548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:19.640 [2024-11-19 10:53:09.238553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.640 [2024-11-19 10:53:09.238556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb39690) 00:25:19.640 [2024-11-19 10:53:09.238562] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.640 [2024-11-19 10:53:09.238572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xb9b100, cid 0, qid 0 00:25:19.640 [2024-11-19 10:53:09.238577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b280, cid 1, qid 0 00:25:19.640 [2024-11-19 10:53:09.238581] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b400, cid 2, qid 0 00:25:19.640 [2024-11-19 10:53:09.238585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b580, cid 3, qid 0 00:25:19.640 [2024-11-19 10:53:09.238589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b700, cid 4, qid 0 00:25:19.640 [2024-11-19 10:53:09.238687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.640 [2024-11-19 10:53:09.238693] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.640 [2024-11-19 10:53:09.238696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.238699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b700) on tqpair=0xb39690 00:25:19.641 [2024-11-19 10:53:09.238705] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:25:19.641 [2024-11-19 10:53:09.238709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:19.641 [2024-11-19 10:53:09.238716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:25:19.641 [2024-11-19 10:53:09.238721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:19.641 [2024-11-19 10:53:09.238726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.238731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.641 [2024-11-19 
10:53:09.238734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb39690) 00:25:19.641 [2024-11-19 10:53:09.238740] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:19.641 [2024-11-19 10:53:09.238749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b700, cid 4, qid 0 00:25:19.641 [2024-11-19 10:53:09.238813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.641 [2024-11-19 10:53:09.238818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.641 [2024-11-19 10:53:09.238821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.238824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b700) on tqpair=0xb39690 00:25:19.641 [2024-11-19 10:53:09.238875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:25:19.641 [2024-11-19 10:53:09.238884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:19.641 [2024-11-19 10:53:09.238890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.238894] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb39690) 00:25:19.641 [2024-11-19 10:53:09.238899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.641 [2024-11-19 10:53:09.238909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b700, cid 4, qid 0 00:25:19.641 [2024-11-19 10:53:09.238980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.641 [2024-11-19 10:53:09.238986] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:19.641 [2024-11-19 10:53:09.238989] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.238992] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb39690): datao=0, datal=4096, cccid=4 00:25:19.641 [2024-11-19 10:53:09.238996] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb9b700) on tqpair(0xb39690): expected_datao=0, payload_size=4096 00:25:19.641 [2024-11-19 10:53:09.238999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.239011] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.239015] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.641 [2024-11-19 10:53:09.279356] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.641 [2024-11-19 10:53:09.279359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b700) on tqpair=0xb39690 00:25:19.641 [2024-11-19 10:53:09.279371] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:25:19.641 [2024-11-19 10:53:09.279383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:25:19.641 [2024-11-19 10:53:09.279391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:25:19.641 [2024-11-19 10:53:09.279397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279401] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0xb39690) 00:25:19.641 [2024-11-19 10:53:09.279407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.641 [2024-11-19 10:53:09.279418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b700, cid 4, qid 0 00:25:19.641 [2024-11-19 10:53:09.279498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.641 [2024-11-19 10:53:09.279507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:19.641 [2024-11-19 10:53:09.279510] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279513] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb39690): datao=0, datal=4096, cccid=4 00:25:19.641 [2024-11-19 10:53:09.279517] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb9b700) on tqpair(0xb39690): expected_datao=0, payload_size=4096 00:25:19.641 [2024-11-19 10:53:09.279520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279526] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279529] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.641 [2024-11-19 10:53:09.279547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.641 [2024-11-19 10:53:09.279550] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279553] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b700) on tqpair=0xb39690 00:25:19.641 [2024-11-19 10:53:09.279563] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:19.641 [2024-11-19 
10:53:09.279571] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:19.641 [2024-11-19 10:53:09.279577] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb39690) 00:25:19.641 [2024-11-19 10:53:09.279585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.641 [2024-11-19 10:53:09.279596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b700, cid 4, qid 0 00:25:19.641 [2024-11-19 10:53:09.279664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.641 [2024-11-19 10:53:09.279670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:19.641 [2024-11-19 10:53:09.279673] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279676] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb39690): datao=0, datal=4096, cccid=4 00:25:19.641 [2024-11-19 10:53:09.279680] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb9b700) on tqpair(0xb39690): expected_datao=0, payload_size=4096 00:25:19.641 [2024-11-19 10:53:09.279683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279689] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279692] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.641 [2024-11-19 10:53:09.279709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.641 [2024-11-19 10:53:09.279712] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b700) on tqpair=0xb39690 00:25:19.641 [2024-11-19 10:53:09.279721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:19.641 [2024-11-19 10:53:09.279728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:25:19.641 [2024-11-19 10:53:09.279735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:25:19.641 [2024-11-19 10:53:09.279740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:19.641 [2024-11-19 10:53:09.279744] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:19.641 [2024-11-19 10:53:09.279750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:25:19.641 [2024-11-19 10:53:09.279755] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:25:19.641 [2024-11-19 10:53:09.279759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:25:19.641 [2024-11-19 10:53:09.279763] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:25:19.641 [2024-11-19 10:53:09.279774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279778] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb39690) 00:25:19.641 [2024-11-19 10:53:09.279783] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.641 [2024-11-19 10:53:09.279789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb39690) 00:25:19.641 [2024-11-19 10:53:09.279800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.641 [2024-11-19 10:53:09.279812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b700, cid 4, qid 0 00:25:19.641 [2024-11-19 10:53:09.279816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b880, cid 5, qid 0 00:25:19.641 [2024-11-19 10:53:09.279893] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.641 [2024-11-19 10:53:09.279898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.641 [2024-11-19 10:53:09.279901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b700) on tqpair=0xb39690 00:25:19.641 [2024-11-19 10:53:09.279909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.641 [2024-11-19 10:53:09.279914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.641 [2024-11-19 10:53:09.279917] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279920] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b880) on tqpair=0xb39690 00:25:19.641 [2024-11-19 
10:53:09.279928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.641 [2024-11-19 10:53:09.279932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb39690) 00:25:19.642 [2024-11-19 10:53:09.279937] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.642 [2024-11-19 10:53:09.279946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b880, cid 5, qid 0 00:25:19.642 [2024-11-19 10:53:09.280010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.642 [2024-11-19 10:53:09.280015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.642 [2024-11-19 10:53:09.280018] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b880) on tqpair=0xb39690 00:25:19.642 [2024-11-19 10:53:09.280029] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb39690) 00:25:19.642 [2024-11-19 10:53:09.280037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.642 [2024-11-19 10:53:09.280046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b880, cid 5, qid 0 00:25:19.642 [2024-11-19 10:53:09.280107] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.642 [2024-11-19 10:53:09.280112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.642 [2024-11-19 10:53:09.280116] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0xb9b880) on tqpair=0xb39690 00:25:19.642 [2024-11-19 10:53:09.280126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb39690) 00:25:19.642 [2024-11-19 10:53:09.280135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.642 [2024-11-19 10:53:09.280144] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b880, cid 5, qid 0 00:25:19.642 [2024-11-19 10:53:09.280212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.642 [2024-11-19 10:53:09.280218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.642 [2024-11-19 10:53:09.280221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b880) on tqpair=0xb39690 00:25:19.642 [2024-11-19 10:53:09.280235] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb39690) 00:25:19.642 [2024-11-19 10:53:09.280244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.642 [2024-11-19 10:53:09.280250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb39690) 00:25:19.642 [2024-11-19 10:53:09.280258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.642 
[2024-11-19 10:53:09.280264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb39690) 00:25:19.642 [2024-11-19 10:53:09.280272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.642 [2024-11-19 10:53:09.280278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb39690) 00:25:19.642 [2024-11-19 10:53:09.280287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.642 [2024-11-19 10:53:09.280297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b880, cid 5, qid 0 00:25:19.642 [2024-11-19 10:53:09.280302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b700, cid 4, qid 0 00:25:19.642 [2024-11-19 10:53:09.280306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9ba00, cid 6, qid 0 00:25:19.642 [2024-11-19 10:53:09.280310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9bb80, cid 7, qid 0 00:25:19.642 [2024-11-19 10:53:09.280453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.642 [2024-11-19 10:53:09.280459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:19.642 [2024-11-19 10:53:09.280461] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280465] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb39690): datao=0, datal=8192, cccid=5 00:25:19.642 [2024-11-19 10:53:09.280468] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0xb9b880) on tqpair(0xb39690): expected_datao=0, payload_size=8192 00:25:19.642 [2024-11-19 10:53:09.280475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280497] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280500] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.642 [2024-11-19 10:53:09.280510] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:19.642 [2024-11-19 10:53:09.280512] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280515] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb39690): datao=0, datal=512, cccid=4 00:25:19.642 [2024-11-19 10:53:09.280519] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb9b700) on tqpair(0xb39690): expected_datao=0, payload_size=512 00:25:19.642 [2024-11-19 10:53:09.280523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280528] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280531] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.642 [2024-11-19 10:53:09.280540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:19.642 [2024-11-19 10:53:09.280543] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280546] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb39690): datao=0, datal=512, cccid=6 00:25:19.642 [2024-11-19 10:53:09.280550] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb9ba00) on tqpair(0xb39690): expected_datao=0, 
payload_size=512 00:25:19.642 [2024-11-19 10:53:09.280553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280558] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280561] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.642 [2024-11-19 10:53:09.280570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:19.642 [2024-11-19 10:53:09.280573] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280576] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb39690): datao=0, datal=4096, cccid=7 00:25:19.642 [2024-11-19 10:53:09.280580] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb9bb80) on tqpair(0xb39690): expected_datao=0, payload_size=4096 00:25:19.642 [2024-11-19 10:53:09.280584] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280589] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280592] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.642 [2024-11-19 10:53:09.280604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.642 [2024-11-19 10:53:09.280607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b880) on tqpair=0xb39690 00:25:19.642 [2024-11-19 10:53:09.280619] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.642 [2024-11-19 10:53:09.280624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.642 [2024-11-19 
10:53:09.280627] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280630] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b700) on tqpair=0xb39690 00:25:19.642 [2024-11-19 10:53:09.280639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.642 [2024-11-19 10:53:09.280644] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.642 [2024-11-19 10:53:09.280646] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9ba00) on tqpair=0xb39690 00:25:19.642 [2024-11-19 10:53:09.280655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.642 [2024-11-19 10:53:09.280661] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.642 [2024-11-19 10:53:09.280664] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.642 [2024-11-19 10:53:09.280667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9bb80) on tqpair=0xb39690 00:25:19.642 ===================================================== 00:25:19.642 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:19.642 ===================================================== 00:25:19.642 Controller Capabilities/Features 00:25:19.642 ================================ 00:25:19.642 Vendor ID: 8086 00:25:19.642 Subsystem Vendor ID: 8086 00:25:19.642 Serial Number: SPDK00000000000001 00:25:19.642 Model Number: SPDK bdev Controller 00:25:19.642 Firmware Version: 25.01 00:25:19.642 Recommended Arb Burst: 6 00:25:19.642 IEEE OUI Identifier: e4 d2 5c 00:25:19.642 Multi-path I/O 00:25:19.642 May have multiple subsystem ports: Yes 00:25:19.642 May have multiple controllers: Yes 00:25:19.642 Associated with SR-IOV VF: No 00:25:19.642 Max Data Transfer Size: 131072 00:25:19.642 Max Number of Namespaces: 32 00:25:19.642 
Max Number of I/O Queues: 127 00:25:19.642 NVMe Specification Version (VS): 1.3 00:25:19.642 NVMe Specification Version (Identify): 1.3 00:25:19.642 Maximum Queue Entries: 128 00:25:19.642 Contiguous Queues Required: Yes 00:25:19.642 Arbitration Mechanisms Supported 00:25:19.642 Weighted Round Robin: Not Supported 00:25:19.642 Vendor Specific: Not Supported 00:25:19.642 Reset Timeout: 15000 ms 00:25:19.642 Doorbell Stride: 4 bytes 00:25:19.643 NVM Subsystem Reset: Not Supported 00:25:19.643 Command Sets Supported 00:25:19.643 NVM Command Set: Supported 00:25:19.643 Boot Partition: Not Supported 00:25:19.643 Memory Page Size Minimum: 4096 bytes 00:25:19.643 Memory Page Size Maximum: 4096 bytes 00:25:19.643 Persistent Memory Region: Not Supported 00:25:19.643 Optional Asynchronous Events Supported 00:25:19.643 Namespace Attribute Notices: Supported 00:25:19.643 Firmware Activation Notices: Not Supported 00:25:19.643 ANA Change Notices: Not Supported 00:25:19.643 PLE Aggregate Log Change Notices: Not Supported 00:25:19.643 LBA Status Info Alert Notices: Not Supported 00:25:19.643 EGE Aggregate Log Change Notices: Not Supported 00:25:19.643 Normal NVM Subsystem Shutdown event: Not Supported 00:25:19.643 Zone Descriptor Change Notices: Not Supported 00:25:19.643 Discovery Log Change Notices: Not Supported 00:25:19.643 Controller Attributes 00:25:19.643 128-bit Host Identifier: Supported 00:25:19.643 Non-Operational Permissive Mode: Not Supported 00:25:19.643 NVM Sets: Not Supported 00:25:19.643 Read Recovery Levels: Not Supported 00:25:19.643 Endurance Groups: Not Supported 00:25:19.643 Predictable Latency Mode: Not Supported 00:25:19.643 Traffic Based Keep ALive: Not Supported 00:25:19.643 Namespace Granularity: Not Supported 00:25:19.643 SQ Associations: Not Supported 00:25:19.643 UUID List: Not Supported 00:25:19.643 Multi-Domain Subsystem: Not Supported 00:25:19.643 Fixed Capacity Management: Not Supported 00:25:19.643 Variable Capacity Management: Not Supported 
00:25:19.643 Delete Endurance Group: Not Supported 00:25:19.643 Delete NVM Set: Not Supported 00:25:19.643 Extended LBA Formats Supported: Not Supported 00:25:19.643 Flexible Data Placement Supported: Not Supported 00:25:19.643 00:25:19.643 Controller Memory Buffer Support 00:25:19.643 ================================ 00:25:19.643 Supported: No 00:25:19.643 00:25:19.643 Persistent Memory Region Support 00:25:19.643 ================================ 00:25:19.643 Supported: No 00:25:19.643 00:25:19.643 Admin Command Set Attributes 00:25:19.643 ============================ 00:25:19.643 Security Send/Receive: Not Supported 00:25:19.643 Format NVM: Not Supported 00:25:19.643 Firmware Activate/Download: Not Supported 00:25:19.643 Namespace Management: Not Supported 00:25:19.643 Device Self-Test: Not Supported 00:25:19.643 Directives: Not Supported 00:25:19.643 NVMe-MI: Not Supported 00:25:19.643 Virtualization Management: Not Supported 00:25:19.643 Doorbell Buffer Config: Not Supported 00:25:19.643 Get LBA Status Capability: Not Supported 00:25:19.643 Command & Feature Lockdown Capability: Not Supported 00:25:19.643 Abort Command Limit: 4 00:25:19.643 Async Event Request Limit: 4 00:25:19.643 Number of Firmware Slots: N/A 00:25:19.643 Firmware Slot 1 Read-Only: N/A 00:25:19.643 Firmware Activation Without Reset: N/A 00:25:19.643 Multiple Update Detection Support: N/A 00:25:19.643 Firmware Update Granularity: No Information Provided 00:25:19.643 Per-Namespace SMART Log: No 00:25:19.643 Asymmetric Namespace Access Log Page: Not Supported 00:25:19.643 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:19.643 Command Effects Log Page: Supported 00:25:19.643 Get Log Page Extended Data: Supported 00:25:19.643 Telemetry Log Pages: Not Supported 00:25:19.643 Persistent Event Log Pages: Not Supported 00:25:19.643 Supported Log Pages Log Page: May Support 00:25:19.643 Commands Supported & Effects Log Page: Not Supported 00:25:19.643 Feature Identifiers & Effects Log Page:May Support 
00:25:19.643 NVMe-MI Commands & Effects Log Page: May Support 00:25:19.643 Data Area 4 for Telemetry Log: Not Supported 00:25:19.643 Error Log Page Entries Supported: 128 00:25:19.643 Keep Alive: Supported 00:25:19.643 Keep Alive Granularity: 10000 ms 00:25:19.643 00:25:19.643 NVM Command Set Attributes 00:25:19.643 ========================== 00:25:19.643 Submission Queue Entry Size 00:25:19.643 Max: 64 00:25:19.643 Min: 64 00:25:19.643 Completion Queue Entry Size 00:25:19.643 Max: 16 00:25:19.643 Min: 16 00:25:19.643 Number of Namespaces: 32 00:25:19.643 Compare Command: Supported 00:25:19.643 Write Uncorrectable Command: Not Supported 00:25:19.643 Dataset Management Command: Supported 00:25:19.643 Write Zeroes Command: Supported 00:25:19.643 Set Features Save Field: Not Supported 00:25:19.643 Reservations: Supported 00:25:19.643 Timestamp: Not Supported 00:25:19.643 Copy: Supported 00:25:19.643 Volatile Write Cache: Present 00:25:19.643 Atomic Write Unit (Normal): 1 00:25:19.643 Atomic Write Unit (PFail): 1 00:25:19.643 Atomic Compare & Write Unit: 1 00:25:19.643 Fused Compare & Write: Supported 00:25:19.643 Scatter-Gather List 00:25:19.643 SGL Command Set: Supported 00:25:19.643 SGL Keyed: Supported 00:25:19.643 SGL Bit Bucket Descriptor: Not Supported 00:25:19.643 SGL Metadata Pointer: Not Supported 00:25:19.643 Oversized SGL: Not Supported 00:25:19.643 SGL Metadata Address: Not Supported 00:25:19.643 SGL Offset: Supported 00:25:19.643 Transport SGL Data Block: Not Supported 00:25:19.643 Replay Protected Memory Block: Not Supported 00:25:19.643 00:25:19.643 Firmware Slot Information 00:25:19.643 ========================= 00:25:19.643 Active slot: 1 00:25:19.643 Slot 1 Firmware Revision: 25.01 00:25:19.643 00:25:19.643 00:25:19.643 Commands Supported and Effects 00:25:19.643 ============================== 00:25:19.643 Admin Commands 00:25:19.643 -------------- 00:25:19.643 Get Log Page (02h): Supported 00:25:19.643 Identify (06h): Supported 00:25:19.643 Abort 
(08h): Supported 00:25:19.643 Set Features (09h): Supported 00:25:19.643 Get Features (0Ah): Supported 00:25:19.643 Asynchronous Event Request (0Ch): Supported 00:25:19.643 Keep Alive (18h): Supported 00:25:19.643 I/O Commands 00:25:19.643 ------------ 00:25:19.643 Flush (00h): Supported LBA-Change 00:25:19.643 Write (01h): Supported LBA-Change 00:25:19.643 Read (02h): Supported 00:25:19.643 Compare (05h): Supported 00:25:19.643 Write Zeroes (08h): Supported LBA-Change 00:25:19.643 Dataset Management (09h): Supported LBA-Change 00:25:19.643 Copy (19h): Supported LBA-Change 00:25:19.643 00:25:19.643 Error Log 00:25:19.643 ========= 00:25:19.643 00:25:19.643 Arbitration 00:25:19.643 =========== 00:25:19.643 Arbitration Burst: 1 00:25:19.643 00:25:19.643 Power Management 00:25:19.643 ================ 00:25:19.643 Number of Power States: 1 00:25:19.643 Current Power State: Power State #0 00:25:19.643 Power State #0: 00:25:19.643 Max Power: 0.00 W 00:25:19.643 Non-Operational State: Operational 00:25:19.643 Entry Latency: Not Reported 00:25:19.643 Exit Latency: Not Reported 00:25:19.643 Relative Read Throughput: 0 00:25:19.643 Relative Read Latency: 0 00:25:19.643 Relative Write Throughput: 0 00:25:19.643 Relative Write Latency: 0 00:25:19.643 Idle Power: Not Reported 00:25:19.643 Active Power: Not Reported 00:25:19.643 Non-Operational Permissive Mode: Not Supported 00:25:19.643 00:25:19.643 Health Information 00:25:19.643 ================== 00:25:19.643 Critical Warnings: 00:25:19.643 Available Spare Space: OK 00:25:19.643 Temperature: OK 00:25:19.643 Device Reliability: OK 00:25:19.643 Read Only: No 00:25:19.643 Volatile Memory Backup: OK 00:25:19.643 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:19.643 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:19.643 Available Spare: 0% 00:25:19.643 Available Spare Threshold: 0% 00:25:19.643 Life Percentage Used:[2024-11-19 10:53:09.280748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.643 
[2024-11-19 10:53:09.280753] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb39690) 00:25:19.643 [2024-11-19 10:53:09.280758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.643 [2024-11-19 10:53:09.280769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9bb80, cid 7, qid 0 00:25:19.643 [2024-11-19 10:53:09.280844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.643 [2024-11-19 10:53:09.280849] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.643 [2024-11-19 10:53:09.280852] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.643 [2024-11-19 10:53:09.280855] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9bb80) on tqpair=0xb39690 00:25:19.643 [2024-11-19 10:53:09.280881] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:25:19.644 [2024-11-19 10:53:09.280891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b100) on tqpair=0xb39690 00:25:19.644 [2024-11-19 10:53:09.280896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.644 [2024-11-19 10:53:09.280901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b280) on tqpair=0xb39690 00:25:19.644 [2024-11-19 10:53:09.280905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.644 [2024-11-19 10:53:09.280909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b400) on tqpair=0xb39690 00:25:19.644 [2024-11-19 10:53:09.280913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.644 
[2024-11-19 10:53:09.280917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b580) on tqpair=0xb39690 00:25:19.644 [2024-11-19 10:53:09.280921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.644 [2024-11-19 10:53:09.280927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.644 [2024-11-19 10:53:09.280930] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.644 [2024-11-19 10:53:09.280933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb39690) 00:25:19.644 [2024-11-19 10:53:09.280939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.644 [2024-11-19 10:53:09.280950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b580, cid 3, qid 0 00:25:19.644 [2024-11-19 10:53:09.281013] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.644 [2024-11-19 10:53:09.281018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.644 [2024-11-19 10:53:09.281021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.644 [2024-11-19 10:53:09.281024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b580) on tqpair=0xb39690 00:25:19.644 [2024-11-19 10:53:09.281030] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.644 [2024-11-19 10:53:09.281033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.644 [2024-11-19 10:53:09.281036] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb39690) 00:25:19.644 [2024-11-19 10:53:09.281042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.644 [2024-11-19 10:53:09.281053] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b580, cid 3, qid 0 00:25:19.644 [2024-11-19 10:53:09.281126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.644 [2024-11-19 10:53:09.281131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.644 [2024-11-19 10:53:09.281134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.644 [2024-11-19 10:53:09.281137] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b580) on tqpair=0xb39690 00:25:19.644 [2024-11-19 10:53:09.281141] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:25:19.644 [2024-11-19 10:53:09.281145] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:25:19.644 [2024-11-19 10:53:09.281152] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.644 [2024-11-19 10:53:09.281156] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.644 [2024-11-19 10:53:09.281159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb39690) 00:25:19.644 [2024-11-19 10:53:09.281165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.644 [2024-11-19 10:53:09.281174] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b580, cid 3, qid 0 00:25:19.644 [2024-11-19 10:53:09.285210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.644 [2024-11-19 10:53:09.285217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.644 [2024-11-19 10:53:09.285220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.644 [2024-11-19 10:53:09.285224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b580) on tqpair=0xb39690 00:25:19.644 [2024-11-19 10:53:09.285232] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.644 [2024-11-19 10:53:09.285236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.644 [2024-11-19 10:53:09.285239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb39690) 00:25:19.644 [2024-11-19 10:53:09.285244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.644 [2024-11-19 10:53:09.285255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9b580, cid 3, qid 0 00:25:19.644 [2024-11-19 10:53:09.285405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.644 [2024-11-19 10:53:09.285411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.644 [2024-11-19 10:53:09.285414] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.644 [2024-11-19 10:53:09.285417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb9b580) on tqpair=0xb39690 00:25:19.644 [2024-11-19 10:53:09.285423] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:25:19.644 0% 00:25:19.644 Data Units Read: 0 00:25:19.644 Data Units Written: 0 00:25:19.644 Host Read Commands: 0 00:25:19.644 Host Write Commands: 0 00:25:19.644 Controller Busy Time: 0 minutes 00:25:19.644 Power Cycles: 0 00:25:19.644 Power On Hours: 0 hours 00:25:19.644 Unsafe Shutdowns: 0 00:25:19.644 Unrecoverable Media Errors: 0 00:25:19.644 Lifetime Error Log Entries: 0 00:25:19.644 Warning Temperature Time: 0 minutes 00:25:19.644 Critical Temperature Time: 0 minutes 00:25:19.644 00:25:19.644 Number of Queues 00:25:19.644 ================ 00:25:19.644 Number of I/O Submission Queues: 127 00:25:19.644 Number of I/O Completion Queues: 127 00:25:19.644 00:25:19.644 Active Namespaces 00:25:19.644 ================= 00:25:19.644 Namespace ID:1 
00:25:19.644 Error Recovery Timeout: Unlimited 00:25:19.644 Command Set Identifier: NVM (00h) 00:25:19.644 Deallocate: Supported 00:25:19.644 Deallocated/Unwritten Error: Not Supported 00:25:19.644 Deallocated Read Value: Unknown 00:25:19.644 Deallocate in Write Zeroes: Not Supported 00:25:19.644 Deallocated Guard Field: 0xFFFF 00:25:19.644 Flush: Supported 00:25:19.644 Reservation: Supported 00:25:19.644 Namespace Sharing Capabilities: Multiple Controllers 00:25:19.644 Size (in LBAs): 131072 (0GiB) 00:25:19.644 Capacity (in LBAs): 131072 (0GiB) 00:25:19.644 Utilization (in LBAs): 131072 (0GiB) 00:25:19.644 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:19.644 EUI64: ABCDEF0123456789 00:25:19.644 UUID: f57c83d4-6e98-4839-83a4-4f6a3e807bf6 00:25:19.644 Thin Provisioning: Not Supported 00:25:19.644 Per-NS Atomic Units: Yes 00:25:19.644 Atomic Boundary Size (Normal): 0 00:25:19.644 Atomic Boundary Size (PFail): 0 00:25:19.644 Atomic Boundary Offset: 0 00:25:19.644 Maximum Single Source Range Length: 65535 00:25:19.644 Maximum Copy Length: 65535 00:25:19.644 Maximum Source Range Count: 1 00:25:19.644 NGUID/EUI64 Never Reused: No 00:25:19.644 Namespace Write Protected: No 00:25:19.644 Number of LBA Formats: 1 00:25:19.644 Current LBA Format: LBA Format #00 00:25:19.644 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:19.644 00:25:19.644 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:19.644 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:19.644 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.644 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.644 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.644 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 
00:25:19.644 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:19.644 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:19.644 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:25:19.644 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:19.644 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:25:19.644 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:19.644 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:19.644 rmmod nvme_tcp 00:25:19.644 rmmod nvme_fabrics 00:25:19.644 rmmod nvme_keyring 00:25:19.644 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:19.645 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:25:19.645 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:25:19.645 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 4011967 ']' 00:25:19.645 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 4011967 00:25:19.645 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 4011967 ']' 00:25:19.645 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 4011967 00:25:19.645 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:25:19.645 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:19.645 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4011967 00:25:19.645 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:19.645 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:19.645 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4011967' 00:25:19.645 killing process with pid 4011967 00:25:19.645 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 4011967 00:25:19.645 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 4011967 00:25:19.904 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:19.904 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:19.904 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:19.904 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:25:19.904 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:25:19.904 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:25:19.904 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:19.904 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:19.904 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:19.904 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.904 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.904 10:53:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.440 10:53:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:22.440 00:25:22.440 real 0m9.353s 00:25:22.440 user 0m5.501s 00:25:22.440 sys 0m4.832s 00:25:22.440 10:53:11 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:25:22.440 10:53:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:22.440 ************************************ 00:25:22.440 END TEST nvmf_identify 00:25:22.440 ************************************ 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.441 ************************************ 00:25:22.441 START TEST nvmf_perf 00:25:22.441 ************************************ 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:22.441 * Looking for test storage... 
00:25:22.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:22.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.441 --rc genhtml_branch_coverage=1 00:25:22.441 --rc genhtml_function_coverage=1 00:25:22.441 --rc genhtml_legend=1 00:25:22.441 --rc geninfo_all_blocks=1 00:25:22.441 --rc geninfo_unexecuted_blocks=1 00:25:22.441 00:25:22.441 ' 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:22.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:25:22.441 --rc genhtml_branch_coverage=1 00:25:22.441 --rc genhtml_function_coverage=1 00:25:22.441 --rc genhtml_legend=1 00:25:22.441 --rc geninfo_all_blocks=1 00:25:22.441 --rc geninfo_unexecuted_blocks=1 00:25:22.441 00:25:22.441 ' 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:22.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.441 --rc genhtml_branch_coverage=1 00:25:22.441 --rc genhtml_function_coverage=1 00:25:22.441 --rc genhtml_legend=1 00:25:22.441 --rc geninfo_all_blocks=1 00:25:22.441 --rc geninfo_unexecuted_blocks=1 00:25:22.441 00:25:22.441 ' 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:22.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.441 --rc genhtml_branch_coverage=1 00:25:22.441 --rc genhtml_function_coverage=1 00:25:22.441 --rc genhtml_legend=1 00:25:22.441 --rc geninfo_all_blocks=1 00:25:22.441 --rc geninfo_unexecuted_blocks=1 00:25:22.441 00:25:22.441 ' 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:22.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:22.441 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:22.442 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:22.442 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:22.442 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:22.442 10:53:11 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:22.442 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:22.442 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:22.442 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.442 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:22.442 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:22.442 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:22.442 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.442 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.442 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.442 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:22.442 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:22.442 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:22.442 10:53:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:27.716 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.716 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:27.716 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:27.716 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:27.716 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:27.976 10:53:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:27.976 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.976 
10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:27.977 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:27.977 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:27.977 Found net devices under 0000:86:00.0: cvl_0_0 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:27.977 10:53:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:27.977 Found net devices under 0000:86:00.1: cvl_0_1 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:27.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:27.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:25:27.977 00:25:27.977 --- 10.0.0.2 ping statistics --- 00:25:27.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.977 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:27.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:27.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:25:27.977 00:25:27.977 --- 10.0.0.1 ping statistics --- 00:25:27.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.977 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:27.977 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:28.237 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:28.237 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:25:28.237 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:28.237 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:28.237 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=4015589 00:25:28.237 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:28.237 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 4015589 00:25:28.237 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 4015589 ']' 00:25:28.237 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.237 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.237 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.237 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.237 10:53:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:28.237 [2024-11-19 10:53:17.855281] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:25:28.237 [2024-11-19 10:53:17.855324] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.237 [2024-11-19 10:53:17.934697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:28.237 [2024-11-19 10:53:17.974816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.237 [2024-11-19 10:53:17.974851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.237 [2024-11-19 10:53:17.974858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.237 [2024-11-19 10:53:17.974864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.237 [2024-11-19 10:53:17.974868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:28.237 [2024-11-19 10:53:17.976418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.237 [2024-11-19 10:53:17.976526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.237 [2024-11-19 10:53:17.976633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.237 [2024-11-19 10:53:17.976634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:29.174 10:53:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:29.174 10:53:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:29.174 10:53:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:29.174 10:53:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:29.174 10:53:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:29.174 10:53:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.174 10:53:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:29.174 10:53:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:32.460 10:53:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:32.460 10:53:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:32.460 10:53:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:25:32.460 10:53:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:32.460 10:53:22 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:32.460 10:53:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:25:32.460 10:53:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:32.460 10:53:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:32.460 10:53:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:32.719 [2024-11-19 10:53:22.349699] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.719 10:53:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:32.978 10:53:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:32.978 10:53:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:33.237 10:53:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:33.237 10:53:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:33.237 10:53:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.496 [2024-11-19 10:53:23.173985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.496 10:53:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:25:33.755 10:53:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:25:33.755 10:53:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:25:33.755 10:53:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:33.755 10:53:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:25:35.134 Initializing NVMe Controllers 00:25:35.134 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:25:35.134 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:25:35.134 Initialization complete. Launching workers. 00:25:35.134 ======================================================== 00:25:35.134 Latency(us) 00:25:35.134 Device Information : IOPS MiB/s Average min max 00:25:35.134 PCIE (0000:5e:00.0) NSID 1 from core 0: 99146.94 387.29 322.20 40.90 6241.37 00:25:35.134 ======================================================== 00:25:35.134 Total : 99146.94 387.29 322.20 40.90 6241.37 00:25:35.134 00:25:35.134 10:53:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:36.512 Initializing NVMe Controllers 00:25:36.512 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:36.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:36.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:36.512 Initialization complete. Launching workers. 
00:25:36.512 ======================================================== 00:25:36.512 Latency(us) 00:25:36.512 Device Information : IOPS MiB/s Average min max 00:25:36.512 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 84.70 0.33 12022.43 109.69 45674.51 00:25:36.512 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 47.83 0.19 21223.04 7192.95 47884.64 00:25:36.512 ======================================================== 00:25:36.512 Total : 132.53 0.52 15342.95 109.69 47884.64 00:25:36.512 00:25:36.512 10:53:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:37.450 Initializing NVMe Controllers 00:25:37.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:37.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:37.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:37.450 Initialization complete. Launching workers. 
00:25:37.450 ======================================================== 00:25:37.450 Latency(us) 00:25:37.450 Device Information : IOPS MiB/s Average min max 00:25:37.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11127.68 43.47 2875.75 423.18 6268.68 00:25:37.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3840.89 15.00 8374.21 7139.67 16010.98 00:25:37.450 ======================================================== 00:25:37.450 Total : 14968.57 58.47 4286.64 423.18 16010.98 00:25:37.450 00:25:37.450 10:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:37.450 10:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:37.450 10:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:40.136 Initializing NVMe Controllers 00:25:40.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:40.136 Controller IO queue size 128, less than required. 00:25:40.136 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:40.136 Controller IO queue size 128, less than required. 00:25:40.136 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:40.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:40.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:40.136 Initialization complete. Launching workers. 
00:25:40.136 ======================================================== 00:25:40.136 Latency(us) 00:25:40.136 Device Information : IOPS MiB/s Average min max 00:25:40.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1801.95 450.49 72266.96 46238.50 136534.80 00:25:40.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 586.48 146.62 224709.23 69873.55 345602.22 00:25:40.136 ======================================================== 00:25:40.136 Total : 2388.43 597.11 109699.40 46238.50 345602.22 00:25:40.136 00:25:40.136 10:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:40.136 No valid NVMe controllers or AIO or URING devices found 00:25:40.136 Initializing NVMe Controllers 00:25:40.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:40.136 Controller IO queue size 128, less than required. 00:25:40.136 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:40.136 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:40.136 Controller IO queue size 128, less than required. 00:25:40.136 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:40.136 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:40.136 WARNING: Some requested NVMe devices were skipped 00:25:40.136 10:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:42.671 Initializing NVMe Controllers 00:25:42.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:42.671 Controller IO queue size 128, less than required. 00:25:42.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:42.671 Controller IO queue size 128, less than required. 00:25:42.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:42.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:42.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:42.671 Initialization complete. Launching workers. 
00:25:42.671 00:25:42.671 ==================== 00:25:42.671 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:42.671 TCP transport: 00:25:42.671 polls: 11069 00:25:42.671 idle_polls: 7764 00:25:42.671 sock_completions: 3305 00:25:42.671 nvme_completions: 6379 00:25:42.671 submitted_requests: 9604 00:25:42.671 queued_requests: 1 00:25:42.671 00:25:42.671 ==================== 00:25:42.671 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:42.671 TCP transport: 00:25:42.671 polls: 11002 00:25:42.671 idle_polls: 7375 00:25:42.671 sock_completions: 3627 00:25:42.671 nvme_completions: 6503 00:25:42.671 submitted_requests: 9760 00:25:42.671 queued_requests: 1 00:25:42.671 ======================================================== 00:25:42.671 Latency(us) 00:25:42.671 Device Information : IOPS MiB/s Average min max 00:25:42.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1594.36 398.59 82707.11 55561.04 148873.17 00:25:42.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1625.35 406.34 78910.67 47310.48 112024.82 00:25:42.671 ======================================================== 00:25:42.671 Total : 3219.71 804.93 80790.62 47310.48 148873.17 00:25:42.671 00:25:42.671 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:42.671 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:42.930 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:42.931 rmmod nvme_tcp 00:25:42.931 rmmod nvme_fabrics 00:25:42.931 rmmod nvme_keyring 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 4015589 ']' 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 4015589 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 4015589 ']' 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 4015589 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4015589 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4015589' 00:25:42.931 killing process with pid 4015589 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 4015589 00:25:42.931 10:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 4015589 00:25:44.834 10:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:44.834 10:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:44.834 10:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:44.834 10:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:44.834 10:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:44.834 10:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:44.834 10:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:45.094 10:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:45.094 10:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:45.094 10:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.094 10:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.094 10:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.000 10:53:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:47.000 00:25:47.000 real 0m24.945s 00:25:47.000 user 1m6.436s 00:25:47.000 sys 0m8.188s 00:25:47.000 10:53:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:47.000 10:53:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:47.000 ************************************ 00:25:47.000 END TEST nvmf_perf 00:25:47.000 ************************************ 00:25:47.000 10:53:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:47.000 10:53:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:47.000 10:53:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:47.000 10:53:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.000 ************************************ 00:25:47.000 START TEST nvmf_fio_host 00:25:47.000 ************************************ 00:25:47.000 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:47.260 * Looking for test storage... 00:25:47.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.260 10:53:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.260 10:53:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:47.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.260 --rc genhtml_branch_coverage=1 00:25:47.260 --rc genhtml_function_coverage=1 00:25:47.260 --rc genhtml_legend=1 00:25:47.260 --rc geninfo_all_blocks=1 00:25:47.260 --rc geninfo_unexecuted_blocks=1 00:25:47.260 00:25:47.260 ' 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:47.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.260 --rc genhtml_branch_coverage=1 00:25:47.260 --rc genhtml_function_coverage=1 00:25:47.260 --rc genhtml_legend=1 00:25:47.260 --rc geninfo_all_blocks=1 00:25:47.260 --rc geninfo_unexecuted_blocks=1 00:25:47.260 00:25:47.260 ' 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:47.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.260 --rc genhtml_branch_coverage=1 00:25:47.260 --rc genhtml_function_coverage=1 00:25:47.260 --rc genhtml_legend=1 00:25:47.260 --rc geninfo_all_blocks=1 00:25:47.260 --rc geninfo_unexecuted_blocks=1 00:25:47.260 00:25:47.260 ' 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:47.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.260 --rc genhtml_branch_coverage=1 00:25:47.260 --rc genhtml_function_coverage=1 00:25:47.260 --rc genhtml_legend=1 00:25:47.260 --rc geninfo_all_blocks=1 00:25:47.260 --rc geninfo_unexecuted_blocks=1 00:25:47.260 00:25:47.260 ' 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.260 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:47.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:47.261 10:53:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:47.261 10:53:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.841 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:25:53.842 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:53.842 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.842 10:53:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:53.842 Found net devices under 0000:86:00.0: cvl_0_0 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:53.842 Found net devices under 0000:86:00.1: cvl_0_1 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.842 10:53:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:53.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:25:53.842 00:25:53.842 --- 10.0.0.2 ping statistics --- 00:25:53.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.842 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:53.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:25:53.842 00:25:53.842 --- 10.0.0.1 ping statistics --- 00:25:53.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.842 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=4021940 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 4021940 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 4021940 ']' 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.842 10:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.842 [2024-11-19 10:53:42.998903] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:25:53.842 [2024-11-19 10:53:42.998948] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.842 [2024-11-19 10:53:43.074874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:53.842 [2024-11-19 10:53:43.116972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.842 [2024-11-19 10:53:43.117008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:53.842 [2024-11-19 10:53:43.117015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.842 [2024-11-19 10:53:43.117021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.842 [2024-11-19 10:53:43.117026] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:53.842 [2024-11-19 10:53:43.118572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.843 [2024-11-19 10:53:43.118686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:53.843 [2024-11-19 10:53:43.118796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.843 [2024-11-19 10:53:43.118797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:53.843 10:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.843 10:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:53.843 10:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:53.843 [2024-11-19 10:53:43.374978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.843 10:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:53.843 10:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:53.843 10:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.843 10:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:54.102 Malloc1 00:25:54.102 10:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:54.102 10:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:54.361 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:54.619 [2024-11-19 10:53:44.254480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.619 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:54.878 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:54.878 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:54.878 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:54.878 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:54.878 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:54.878 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:54.879 10:53:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:54.879 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:54.879 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:54.879 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:54.879 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:54.879 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:54.879 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:54.879 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:54.879 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:54.879 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:54.879 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:54.879 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:54.879 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:54.879 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:54.879 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:54.879 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:54.879 10:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:55.138 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:55.138 fio-3.35 00:25:55.138 Starting 1 thread 00:25:57.672 00:25:57.672 test: (groupid=0, jobs=1): err= 0: pid=4022321: Tue Nov 19 10:53:47 2024 00:25:57.672 read: IOPS=11.9k, BW=46.3MiB/s (48.6MB/s)(92.9MiB/2005msec) 00:25:57.672 slat (nsec): min=1533, max=241075, avg=1740.13, stdev=2196.44 00:25:57.672 clat (usec): min=3104, max=10502, avg=5948.52, stdev=443.17 00:25:57.672 lat (usec): min=3135, max=10503, avg=5950.26, stdev=443.00 00:25:57.672 clat percentiles (usec): 00:25:57.672 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:25:57.672 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:25:57.672 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6456], 95.00th=[ 6652], 00:25:57.672 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 8291], 99.95th=[ 9765], 00:25:57.672 | 99.99th=[10421] 00:25:57.672 bw ( KiB/s): min=46456, max=47968, per=99.97%, avg=47436.00, stdev=692.96, samples=4 00:25:57.672 iops : min=11614, max=11992, avg=11859.00, stdev=173.24, samples=4 00:25:57.672 write: IOPS=11.8k, BW=46.1MiB/s (48.4MB/s)(92.5MiB/2005msec); 0 zone resets 00:25:57.672 slat (nsec): min=1564, max=226504, avg=1799.04, stdev=1643.91 00:25:57.672 clat (usec): min=2411, max=9667, avg=4814.80, stdev=365.77 00:25:57.672 lat (usec): min=2426, max=9668, avg=4816.60, stdev=365.69 00:25:57.672 clat percentiles (usec): 00:25:57.672 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:25:57.672 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883], 
00:25:57.672 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:25:57.672 | 99.00th=[ 5604], 99.50th=[ 5669], 99.90th=[ 7504], 99.95th=[ 8356], 00:25:57.672 | 99.99th=[ 9634] 00:25:57.672 bw ( KiB/s): min=46720, max=47808, per=99.99%, avg=47234.00, stdev=463.37, samples=4 00:25:57.672 iops : min=11680, max=11952, avg=11808.50, stdev=115.84, samples=4 00:25:57.672 lat (msec) : 4=0.62%, 10=99.36%, 20=0.02% 00:25:57.672 cpu : usr=75.30%, sys=23.65%, ctx=65, majf=0, minf=3 00:25:57.672 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:57.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:57.672 issued rwts: total=23784,23678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:57.672 00:25:57.672 Run status group 0 (all jobs): 00:25:57.672 READ: bw=46.3MiB/s (48.6MB/s), 46.3MiB/s-46.3MiB/s (48.6MB/s-48.6MB/s), io=92.9MiB (97.4MB), run=2005-2005msec 00:25:57.672 WRITE: bw=46.1MiB/s (48.4MB/s), 46.1MiB/s-46.1MiB/s (48.4MB/s-48.4MB/s), io=92.5MiB (97.0MB), run=2005-2005msec 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:57.672 10:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:57.931 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:57.931 fio-3.35 00:25:57.931 Starting 1 thread 00:26:00.468 00:26:00.468 test: (groupid=0, jobs=1): err= 0: pid=4022886: Tue Nov 19 10:53:49 2024 00:26:00.468 read: IOPS=10.7k, BW=167MiB/s (175MB/s)(335MiB/2005msec) 00:26:00.468 slat (nsec): min=2465, max=84713, avg=2822.66, stdev=1295.88 00:26:00.468 clat (usec): min=1868, max=49795, avg=6935.53, stdev=3378.16 00:26:00.468 lat (usec): min=1871, max=49798, avg=6938.35, stdev=3378.20 00:26:00.468 clat percentiles (usec): 00:26:00.468 | 1.00th=[ 3621], 5.00th=[ 4424], 10.00th=[ 4817], 20.00th=[ 5407], 00:26:00.468 | 30.00th=[ 5800], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7111], 00:26:00.468 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8586], 95.00th=[ 9241], 00:26:00.468 | 99.00th=[11600], 99.50th=[42730], 99.90th=[49021], 99.95th=[49546], 00:26:00.468 | 99.99th=[49546] 00:26:00.468 bw ( KiB/s): min=79776, max=98176, per=51.55%, avg=88104.00, stdev=8732.24, samples=4 00:26:00.468 iops : min= 4986, max= 6136, avg=5506.50, stdev=545.77, samples=4 00:26:00.468 write: IOPS=6484, BW=101MiB/s (106MB/s)(180MiB/1778msec); 0 zone resets 00:26:00.468 slat (usec): min=28, max=327, avg=31.57, stdev= 6.68 00:26:00.468 clat (usec): min=3251, max=13870, avg=8589.10, stdev=1450.39 00:26:00.468 lat (usec): min=3282, max=13986, avg=8620.67, stdev=1451.61 00:26:00.468 clat percentiles (usec): 00:26:00.468 | 1.00th=[ 5538], 5.00th=[ 6390], 10.00th=[ 6849], 
20.00th=[ 7373], 00:26:00.468 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:26:00.468 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11207], 00:26:00.468 | 99.00th=[12256], 99.50th=[12780], 99.90th=[13435], 99.95th=[13698], 00:26:00.468 | 99.99th=[13698] 00:26:00.468 bw ( KiB/s): min=82944, max=102528, per=88.29%, avg=91600.00, stdev=9227.16, samples=4 00:26:00.468 iops : min= 5184, max= 6408, avg=5725.00, stdev=576.70, samples=4 00:26:00.468 lat (msec) : 2=0.02%, 4=1.55%, 10=90.68%, 20=7.37%, 50=0.39% 00:26:00.468 cpu : usr=84.58%, sys=13.97%, ctx=220, majf=0, minf=3 00:26:00.468 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:00.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:00.468 issued rwts: total=21416,11529,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.468 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:00.468 00:26:00.468 Run status group 0 (all jobs): 00:26:00.468 READ: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=335MiB (351MB), run=2005-2005msec 00:26:00.468 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=180MiB (189MB), run=1778-1778msec 00:26:00.468 10:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:00.468 10:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:00.468 10:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:00.468 10:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:00.468 10:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:00.468 10:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:26:00.468 10:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:00.468 rmmod nvme_tcp 00:26:00.468 rmmod nvme_fabrics 00:26:00.468 rmmod nvme_keyring 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 4021940 ']' 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 4021940 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 4021940 ']' 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 4021940 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4021940 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
4021940' 00:26:00.468 killing process with pid 4021940 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 4021940 00:26:00.468 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 4021940 00:26:00.727 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:00.727 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:00.727 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:00.727 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:26:00.727 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:00.727 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:26:00.727 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:00.727 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:00.727 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:00.727 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.727 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.727 10:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.635 10:53:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:02.635 00:26:02.635 real 0m15.614s 00:26:02.635 user 0m45.239s 00:26:02.635 sys 0m6.421s 00:26:02.635 10:53:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:02.635 10:53:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.635 
************************************ 00:26:02.635 END TEST nvmf_fio_host 00:26:02.635 ************************************ 00:26:02.635 10:53:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:02.635 10:53:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:02.635 10:53:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:02.635 10:53:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.895 ************************************ 00:26:02.895 START TEST nvmf_failover 00:26:02.895 ************************************ 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:02.895 * Looking for test storage... 00:26:02.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 
00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.895 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.896 10:53:52 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:02.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.896 --rc genhtml_branch_coverage=1 00:26:02.896 --rc genhtml_function_coverage=1 00:26:02.896 --rc genhtml_legend=1 00:26:02.896 --rc geninfo_all_blocks=1 00:26:02.896 --rc geninfo_unexecuted_blocks=1 00:26:02.896 00:26:02.896 ' 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:02.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.896 --rc genhtml_branch_coverage=1 00:26:02.896 --rc genhtml_function_coverage=1 00:26:02.896 --rc genhtml_legend=1 00:26:02.896 --rc geninfo_all_blocks=1 00:26:02.896 --rc geninfo_unexecuted_blocks=1 00:26:02.896 00:26:02.896 ' 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:02.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.896 --rc genhtml_branch_coverage=1 00:26:02.896 --rc genhtml_function_coverage=1 00:26:02.896 --rc genhtml_legend=1 00:26:02.896 --rc geninfo_all_blocks=1 00:26:02.896 --rc geninfo_unexecuted_blocks=1 00:26:02.896 00:26:02.896 ' 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:02.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.896 --rc genhtml_branch_coverage=1 00:26:02.896 --rc genhtml_function_coverage=1 00:26:02.896 --rc 
genhtml_legend=1 00:26:02.896 --rc geninfo_all_blocks=1 00:26:02.896 --rc geninfo_unexecuted_blocks=1 00:26:02.896 00:26:02.896 ' 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.896 10:53:52 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:02.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:26:02.896 10:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:09.470 10:53:58 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:09.470 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:09.470 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:09.471 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:09.471 Found net devices under 0000:86:00.0: cvl_0_0 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:09.471 Found net devices under 0000:86:00.1: cvl_0_1 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:09.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:09.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:26:09.471 00:26:09.471 --- 10.0.0.2 ping statistics --- 00:26:09.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.471 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:26:09.471 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:09.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:09.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:26:09.471 00:26:09.472 --- 10.0.0.1 ping statistics --- 00:26:09.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.472 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=4026861 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 4026861 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 4026861 ']' 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:09.472 10:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:09.472 [2024-11-19 10:53:58.661546] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:26:09.472 [2024-11-19 10:53:58.661594] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.472 [2024-11-19 10:53:58.739876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:09.472 [2024-11-19 10:53:58.782562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.472 [2024-11-19 10:53:58.782601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:09.472 [2024-11-19 10:53:58.782608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.472 [2024-11-19 10:53:58.782614] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:09.472 [2024-11-19 10:53:58.782619] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:09.472 [2024-11-19 10:53:58.784022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:09.472 [2024-11-19 10:53:58.784130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.472 [2024-11-19 10:53:58.784131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:09.731 10:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.731 10:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:09.731 10:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:09.731 10:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:09.731 10:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:09.731 10:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.731 10:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:09.990 [2024-11-19 10:53:59.690162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.990 10:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:10.249 Malloc0 00:26:10.249 10:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:10.507 10:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:10.766 10:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:10.766 [2024-11-19 10:54:00.542968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:11.025 10:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:11.025 [2024-11-19 10:54:00.739524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:11.025 10:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:11.284 [2024-11-19 10:54:00.936187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:11.284 10:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4027193 00:26:11.284 10:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:11.284 10:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:11.284 10:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4027193 /var/tmp/bdevperf.sock 00:26:11.284 10:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 4027193 ']' 00:26:11.284 10:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:11.284 10:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.284 10:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:11.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:11.284 10:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.284 10:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:11.547 10:54:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.547 10:54:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:11.547 10:54:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:11.815 NVMe0n1 00:26:11.815 10:54:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:12.073 00:26:12.073 10:54:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4027472 00:26:12.073 10:54:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:12.073 10:54:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:26:13.450 10:54:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:13.450 [2024-11-19 10:54:03.039590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21202d0 is same with the state(6) to be set
[the same *ERROR* line for tqpair=0x21202d0 repeated 6 more times, timestamps 10:54:03.039659 through 10:54:03.039694; only the timestamps differ]
00:26:13.450 10:54:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:26:16.740 10:54:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:16.740
00:26:16.740 10:54:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 
-s 4421
00:26:17.000 [2024-11-19 10:54:06.673653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121060 is same with the state(6) to be set
[the same *ERROR* line for tqpair=0x2121060 repeated dozens more times, timestamps 10:54:06.673695 through 10:54:06.674143; only the timestamps differ]
00:26:17.001 10:54:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:26:20.290 10:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:20.290 [2024-11-19 10:54:09.893641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:20.290 10:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:26:21.227 10:54:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:21.486 [2024-11-19 10:54:11.096328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121e30 is same with the state(6) to be set
00:26:21.486 [2024-11-19 10:54:11.096367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121e30 is same with the state(6) to be set
00:26:21.486 [2024-11-19 10:54:11.096374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2121e30 is same with the state(6) to be set
[the same *ERROR* line for tqpair=0x2121e30 repeated roughly a hundred more times, timestamps 10:54:11.096380 through 10:54:11.096959; only the timestamps differ]
00:26:21.487 10:54:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 4027472
00:26:28.058 {
00:26:28.058 "results": [
00:26:28.058 {
00:26:28.058 "job": "NVMe0n1",
00:26:28.058 "core_mask": "0x1",
00:26:28.058 "workload": "verify",
00:26:28.058 "status": "finished",
00:26:28.058 "verify_range": {
00:26:28.058 "start": 0,
00:26:28.058 "length": 16384
00:26:28.058 },
00:26:28.058 "queue_depth": 128,
00:26:28.058 "io_size": 4096,
00:26:28.058 "runtime": 15.002275,
00:26:28.058 "iops": 11132.51156907869,
00:26:28.058 "mibps": 43.48637331671363,
00:26:28.058 "io_failed": 17573,
00:26:28.058 "io_timeout": 0,
00:26:28.058 "avg_latency_us": 10382.004627952489,
00:26:28.058 "min_latency_us": 413.50095238095236,
00:26:28.058 "max_latency_us": 21346.01142857143
00:26:28.058 }
00:26:28.058 ],
00:26:28.058 "core_count": 1
00:26:28.058 }
00:26:28.058 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 4027193
00:26:28.058 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4027193 ']'
00:26:28.058 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4027193
00:26:28.058 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:26:28.058 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:28.058 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4027193
00:26:28.058 10:54:17 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:28.058 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:28.058 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4027193' 00:26:28.058 killing process with pid 4027193 00:26:28.058 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4027193 00:26:28.058 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4027193 00:26:28.058 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:28.058 [2024-11-19 10:54:01.015582] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:26:28.058 [2024-11-19 10:54:01.015642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4027193 ] 00:26:28.058 [2024-11-19 10:54:01.093669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.058 [2024-11-19 10:54:01.137307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.058 Running I/O for 15 seconds... 
00:26:28.058 11226.00 IOPS, 43.85 MiB/s [2024-11-19T09:54:17.850Z]
00:26:28.058 [2024-11-19 10:54:03.040248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.058 [2024-11-19 10:54:03.040281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for every outstanding I/O on qid:1 — WRITE lba:98496 through lba:99120 and READ lba:98104 through lba:98272 — each completed ABORTED - SQ DELETION (00/08) ...]
00:26:28.062 [2024-11-19 10:54:03.041839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1
lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.062 [2024-11-19 10:54:03.041845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.041854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.062 [2024-11-19 10:54:03.041860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.041868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.062 [2024-11-19 10:54:03.041875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.041883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.062 [2024-11-19 10:54:03.041891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.041913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.062 [2024-11-19 10:54:03.041920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98312 len:8 PRP1 0x0 PRP2 0x0 00:26:28.062 [2024-11-19 10:54:03.041927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.041936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.062 [2024-11-19 10:54:03.041942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:26:28.062 [2024-11-19 10:54:03.041948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98320 len:8 PRP1 0x0 PRP2 0x0 00:26:28.062 [2024-11-19 10:54:03.041954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.041962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.062 [2024-11-19 10:54:03.041967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.062 [2024-11-19 10:54:03.041973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98328 len:8 PRP1 0x0 PRP2 0x0 00:26:28.062 [2024-11-19 10:54:03.041979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.041986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.062 [2024-11-19 10:54:03.041991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.062 [2024-11-19 10:54:03.041997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98336 len:8 PRP1 0x0 PRP2 0x0 00:26:28.062 [2024-11-19 10:54:03.042006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.042013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.062 [2024-11-19 10:54:03.042019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.062 [2024-11-19 10:54:03.042024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98344 len:8 PRP1 0x0 PRP2 0x0 00:26:28.062 [2024-11-19 10:54:03.042030] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.042037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.062 [2024-11-19 10:54:03.042042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.062 [2024-11-19 10:54:03.042047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98352 len:8 PRP1 0x0 PRP2 0x0 00:26:28.062 [2024-11-19 10:54:03.042053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.042060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.062 [2024-11-19 10:54:03.042065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.062 [2024-11-19 10:54:03.042070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98360 len:8 PRP1 0x0 PRP2 0x0 00:26:28.062 [2024-11-19 10:54:03.042077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.042083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.062 [2024-11-19 10:54:03.042088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.062 [2024-11-19 10:54:03.042094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98368 len:8 PRP1 0x0 PRP2 0x0 00:26:28.062 [2024-11-19 10:54:03.042101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.042109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.062 [2024-11-19 10:54:03.042114] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.062 [2024-11-19 10:54:03.042120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98376 len:8 PRP1 0x0 PRP2 0x0 00:26:28.062 [2024-11-19 10:54:03.042126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.042133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.062 [2024-11-19 10:54:03.042138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.062 [2024-11-19 10:54:03.042143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98384 len:8 PRP1 0x0 PRP2 0x0 00:26:28.062 [2024-11-19 10:54:03.042150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.042157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.062 [2024-11-19 10:54:03.042162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.062 [2024-11-19 10:54:03.042168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98392 len:8 PRP1 0x0 PRP2 0x0 00:26:28.062 [2024-11-19 10:54:03.042174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.042180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.062 [2024-11-19 10:54:03.042185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.062 [2024-11-19 10:54:03.042194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98400 len:8 PRP1 0x0 PRP2 0x0 
00:26:28.062 [2024-11-19 10:54:03.042200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.042212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.062 [2024-11-19 10:54:03.042217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.062 [2024-11-19 10:54:03.042223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98408 len:8 PRP1 0x0 PRP2 0x0 00:26:28.062 [2024-11-19 10:54:03.042230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.042237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.062 [2024-11-19 10:54:03.042241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.062 [2024-11-19 10:54:03.042247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98416 len:8 PRP1 0x0 PRP2 0x0 00:26:28.062 [2024-11-19 10:54:03.042253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.062 [2024-11-19 10:54:03.042260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.062 [2024-11-19 10:54:03.042265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.063 [2024-11-19 10:54:03.042271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98424 len:8 PRP1 0x0 PRP2 0x0 00:26:28.063 [2024-11-19 10:54:03.042278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.063 [2024-11-19 10:54:03.042284] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.063 [2024-11-19 10:54:03.042289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.063 [2024-11-19 10:54:03.042295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98432 len:8 PRP1 0x0 PRP2 0x0 00:26:28.063 [2024-11-19 10:54:03.042301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.063 [2024-11-19 10:54:03.042309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.063 [2024-11-19 10:54:03.042314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.063 [2024-11-19 10:54:03.042319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98440 len:8 PRP1 0x0 PRP2 0x0 00:26:28.063 [2024-11-19 10:54:03.042326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.063 [2024-11-19 10:54:03.042333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.063 [2024-11-19 10:54:03.042338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.063 [2024-11-19 10:54:03.042345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98448 len:8 PRP1 0x0 PRP2 0x0 00:26:28.063 [2024-11-19 10:54:03.042351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.063 [2024-11-19 10:54:03.042357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.063 [2024-11-19 10:54:03.042362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.063 [2024-11-19 10:54:03.042368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98456 len:8 PRP1 0x0 PRP2 0x0 00:26:28.063 [2024-11-19 10:54:03.042374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.063 [2024-11-19 10:54:03.042381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.063 [2024-11-19 10:54:03.042388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.063 [2024-11-19 10:54:03.042395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98464 len:8 PRP1 0x0 PRP2 0x0 00:26:28.063 [2024-11-19 10:54:03.042402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.063 [2024-11-19 10:54:03.042408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.063 [2024-11-19 10:54:03.042413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.063 [2024-11-19 10:54:03.042419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98472 len:8 PRP1 0x0 PRP2 0x0 00:26:28.063 [2024-11-19 10:54:03.042425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.063 [2024-11-19 10:54:03.042433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.063 [2024-11-19 10:54:03.042437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.063 [2024-11-19 10:54:03.042443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98480 len:8 PRP1 0x0 PRP2 0x0 00:26:28.063 [2024-11-19 10:54:03.042450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 ...)
00:26:28.063 [2024-11-19 10:54:03.042491] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:28.063 [2024-11-19 10:54:03.042513 .. 042567] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: (... ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000; each completion ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...)
00:26:28.063 [2024-11-19 10:54:03.042573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:28.063 [2024-11-19 10:54:03.045316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:28.063 [2024-11-19 10:54:03.045344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bab340 (9): Bad file descriptor
00:26:28.063 [2024-11-19 10:54:03.230786] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:26:28.063 10184.50 IOPS, 39.78 MiB/s [2024-11-19T09:54:17.855Z] 10588.67 IOPS, 41.36 MiB/s [2024-11-19T09:54:17.855Z] 10849.75 IOPS, 42.38 MiB/s [2024-11-19T09:54:17.855Z]
00:26:28.063 [2024-11-19 10:54:06.675195 .. 675709] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: (... READ sqid:1 lba:90992-91248 step 8, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, cid varies; each completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...)
00:26:28.064 [2024-11-19 10:54:06.675717 .. 675935] (... same pair repeated for WRITE sqid:1 lba:91312-91432 step 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, cid varies; each completion ABORTED - SQ DELETION (00/08) qid:1 ...)
00:26:28.065 [2024-11-19 10:54:06.675943] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.675949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.675957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.675963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.675971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.675977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.675987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.675993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 
10:54:06.676273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.065 [2024-11-19 10:54:06.676301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.065 [2024-11-19 10:54:06.676308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676351] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.066 [2024-11-19 10:54:06.676421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:28.066 [2024-11-19 10:54:06.676436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.066 [2024-11-19 10:54:06.676450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.066 [2024-11-19 10:54:06.676464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.066 [2024-11-19 10:54:06.676478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.066 [2024-11-19 10:54:06.676491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 
[2024-11-19 10:54:06.676678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676758] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.066 [2024-11-19 10:54:06.676785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.066 [2024-11-19 10:54:06.676791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.676799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.067 [2024-11-19 10:54:06.676805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.676813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.067 [2024-11-19 10:54:06.676819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.676827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.067 [2024-11-19 10:54:06.676833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.676841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.067 [2024-11-19 10:54:06.676849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.676857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.067 [2024-11-19 10:54:06.676863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.676870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.067 [2024-11-19 10:54:06.676877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.676885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.067 [2024-11-19 10:54:06.676893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.676912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.067 [2024-11-19 10:54:06.676919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91928 len:8 PRP1 0x0 PRP2 0x0 00:26:28.067 [2024-11-19 10:54:06.676925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.676934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:26:28.067 [2024-11-19 10:54:06.676939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.067 [2024-11-19 10:54:06.676945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91936 len:8 PRP1 0x0 PRP2 0x0 00:26:28.067 [2024-11-19 10:54:06.676951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.676958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.067 [2024-11-19 10:54:06.676963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.067 [2024-11-19 10:54:06.676968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91944 len:8 PRP1 0x0 PRP2 0x0 00:26:28.067 [2024-11-19 10:54:06.676974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.676981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.067 [2024-11-19 10:54:06.676986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.067 [2024-11-19 10:54:06.676991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91952 len:8 PRP1 0x0 PRP2 0x0 00:26:28.067 [2024-11-19 10:54:06.676999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.677005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.067 [2024-11-19 10:54:06.677010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.067 [2024-11-19 10:54:06.677015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:91960 len:8 PRP1 0x0 PRP2 0x0 00:26:28.067 [2024-11-19 10:54:06.677022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.677028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.067 [2024-11-19 10:54:06.677033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.067 [2024-11-19 10:54:06.677039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91968 len:8 PRP1 0x0 PRP2 0x0 00:26:28.067 [2024-11-19 10:54:06.677045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.677051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.067 [2024-11-19 10:54:06.677056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.067 [2024-11-19 10:54:06.677063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91976 len:8 PRP1 0x0 PRP2 0x0 00:26:28.067 [2024-11-19 10:54:06.677069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.677076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.067 [2024-11-19 10:54:06.677081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.067 [2024-11-19 10:54:06.677088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91984 len:8 PRP1 0x0 PRP2 0x0 00:26:28.067 [2024-11-19 10:54:06.677094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 
10:54:06.677101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.067 [2024-11-19 10:54:06.677106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.067 [2024-11-19 10:54:06.677111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91992 len:8 PRP1 0x0 PRP2 0x0 00:26:28.067 [2024-11-19 10:54:06.677117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.677124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.067 [2024-11-19 10:54:06.677128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.067 [2024-11-19 10:54:06.677136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92000 len:8 PRP1 0x0 PRP2 0x0 00:26:28.067 [2024-11-19 10:54:06.677142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.677149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.067 [2024-11-19 10:54:06.677154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.067 [2024-11-19 10:54:06.677160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92008 len:8 PRP1 0x0 PRP2 0x0 00:26:28.067 [2024-11-19 10:54:06.677166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.677173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.067 [2024-11-19 10:54:06.677177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.067 
[2024-11-19 10:54:06.677183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91304 len:8 PRP1 0x0 PRP2 0x0 00:26:28.067 [2024-11-19 10:54:06.677190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.688713] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:28.067 [2024-11-19 10:54:06.688746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.067 [2024-11-19 10:54:06.688756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.688766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.067 [2024-11-19 10:54:06.688774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.688784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.067 [2024-11-19 10:54:06.688792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.688802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.067 [2024-11-19 10:54:06.688810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.067 [2024-11-19 10:54:06.688819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] 
in failed state. 00:26:28.067 [2024-11-19 10:54:06.688846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bab340 (9): Bad file descriptor 00:26:28.067 [2024-11-19 10:54:06.692578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:28.068 [2024-11-19 10:54:06.720159] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:26:28.068 10838.20 IOPS, 42.34 MiB/s [2024-11-19T09:54:17.860Z] 10951.17 IOPS, 42.78 MiB/s [2024-11-19T09:54:17.860Z] 11011.43 IOPS, 43.01 MiB/s [2024-11-19T09:54:17.860Z] 11079.62 IOPS, 43.28 MiB/s [2024-11-19T09:54:17.860Z] 11135.44 IOPS, 43.50 MiB/s [2024-11-19T09:54:17.860Z] [2024-11-19 10:54:11.097306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.068 [2024-11-19 10:54:11.097391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:28.068 [2024-11-19 10:54:11.097636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.068 [2024-11-19 10:54:11.097801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.068 [2024-11-19 10:54:11.097808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.097816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.097823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.097831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.097837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.097845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.097851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.097860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.097867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.097874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.069 [2024-11-19 10:54:11.097880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.097888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.097894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.097902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.097909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.097917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.097923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.097930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.097937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.097945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.097951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.097959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.097965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.097980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.097987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.097995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.098001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.098015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.098029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.098045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.098060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.098074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.098088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.098102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.098116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:28.069 [2024-11-19 10:54:11.098130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.098144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.098158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.098172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.098186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.098200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.098220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.098236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.069 [2024-11-19 10:54:11.098250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.069 [2024-11-19 10:54:11.098258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.070 [2024-11-19 10:54:11.098264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.070 [2024-11-19 10:54:11.098280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.070 [2024-11-19 10:54:11.098294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.070 [2024-11-19 10:54:11.098308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.070 [2024-11-19 10:54:11.098322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.070 [2024-11-19 10:54:11.098336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.070 [2024-11-19 10:54:11.098350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.070 [2024-11-19 10:54:11.098364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:28.070 [2024-11-19 10:54:11.098378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.070 [2024-11-19 10:54:11.098393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.070 [2024-11-19 10:54:11.098407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.070 [2024-11-19 10:54:11.098422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.070 [2024-11-19 10:54:11.098436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.070 [2024-11-19 10:54:11.098451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.070 [2024-11-19 10:54:11.098465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.070 [2024-11-19 10:54:11.098479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.070 [2024-11-19 10:54:11.098493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.070 [2024-11-19 10:54:11.098508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.070 [2024-11-19 10:54:11.098522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.070 [2024-11-19 10:54:11.098536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.070 [2024-11-19 10:54:11.098550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.070 [2024-11-19 10:54:11.098565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:110760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.070 [2024-11-19 10:54:11.098578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.070 [2024-11-19 10:54:11.098594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.070 [2024-11-19 10:54:11.098608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.070 [2024-11-19 10:54:11.098622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.070 [2024-11-19 10:54:11.098636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.070 [2024-11-19 10:54:11.098650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.070 [2024-11-19 10:54:11.098664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.070 [2024-11-19 10:54:11.098678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.070 [2024-11-19 10:54:11.098692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098700] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.070 [2024-11-19 10:54:11.098706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.070 [2024-11-19 10:54:11.098713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:28.071 [2024-11-19 10:54:11.098861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.098990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.098996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.099004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.099010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.099018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.099025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.099033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.099039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.099047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.099053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.099061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.099067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.099075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.099082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.099089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.099096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.099103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 
[2024-11-19 10:54:11.099110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.099117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.099124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.099132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.099140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.099147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.071 [2024-11-19 10:54:11.099154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.099175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.071 [2024-11-19 10:54:11.099182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111208 len:8 PRP1 0x0 PRP2 0x0 00:26:28.071 [2024-11-19 10:54:11.099189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.099198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:28.071 [2024-11-19 10:54:11.099206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:28.071 [2024-11-19 10:54:11.099213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111216 len:8 PRP1 0x0 PRP2 0x0 00:26:28.071 [2024-11-19 10:54:11.099219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.099261] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:28.071 [2024-11-19 10:54:11.099284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.071 [2024-11-19 10:54:11.099292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.071 [2024-11-19 10:54:11.099299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.071 [2024-11-19 10:54:11.099305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.072 [2024-11-19 10:54:11.099312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.072 [2024-11-19 10:54:11.099318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.072 [2024-11-19 10:54:11.099325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.072 [2024-11-19 10:54:11.099331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.072 [2024-11-19 10:54:11.099338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:26:28.072 [2024-11-19 10:54:11.102102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:28.072 [2024-11-19 10:54:11.102132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bab340 (9): Bad file descriptor 00:26:28.072 [2024-11-19 10:54:11.245904] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:26:28.072 10984.90 IOPS, 42.91 MiB/s [2024-11-19T09:54:17.864Z] 11022.73 IOPS, 43.06 MiB/s [2024-11-19T09:54:17.864Z] 11056.42 IOPS, 43.19 MiB/s [2024-11-19T09:54:17.864Z] 11078.92 IOPS, 43.28 MiB/s [2024-11-19T09:54:17.864Z] 11109.14 IOPS, 43.40 MiB/s 00:26:28.072 Latency(us) 00:26:28.072 [2024-11-19T09:54:17.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.072 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:28.072 Verification LBA range: start 0x0 length 0x4000 00:26:28.072 NVMe0n1 : 15.00 11132.51 43.49 1171.36 0.00 10382.00 413.50 21346.01 00:26:28.072 [2024-11-19T09:54:17.864Z] =================================================================================================================== 00:26:28.072 [2024-11-19T09:54:17.864Z] Total : 11132.51 43.49 1171.36 0.00 10382.00 413.50 21346.01 00:26:28.072 Received shutdown signal, test time was about 15.000000 seconds 00:26:28.072 00:26:28.072 Latency(us) 00:26:28.072 [2024-11-19T09:54:17.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.072 [2024-11-19T09:54:17.864Z] =================================================================================================================== 00:26:28.072 [2024-11-19T09:54:17.864Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:28.072 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:28.072 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@65 -- # count=3 00:26:28.072 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:28.072 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4030386 00:26:28.072 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:28.072 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4030386 /var/tmp/bdevperf.sock 00:26:28.072 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 4030386 ']' 00:26:28.072 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:28.072 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.072 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:28.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:28.072 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.072 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:28.072 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.072 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:28.072 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:28.072 [2024-11-19 10:54:17.661276] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:28.072 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:28.331 [2024-11-19 10:54:17.849790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:28.331 10:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:28.331 NVMe0n1 00:26:28.590 10:54:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:28.847 00:26:28.847 10:54:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:29.105 00:26:29.105 10:54:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:29.105 10:54:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:29.362 10:54:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:29.625 10:54:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:32.911 10:54:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:32.911 10:54:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:32.911 10:54:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4031108 00:26:32.911 10:54:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:32.911 10:54:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 4031108 00:26:33.849 { 00:26:33.849 "results": [ 00:26:33.849 { 00:26:33.849 "job": "NVMe0n1", 00:26:33.849 "core_mask": "0x1", 00:26:33.849 "workload": "verify", 00:26:33.849 "status": "finished", 00:26:33.849 "verify_range": { 00:26:33.849 "start": 0, 00:26:33.849 "length": 16384 00:26:33.849 }, 00:26:33.849 "queue_depth": 128, 00:26:33.849 "io_size": 4096, 00:26:33.849 "runtime": 1.005197, 00:26:33.849 "iops": 11414.677918855707, 00:26:33.849 "mibps": 44.588585620530104, 00:26:33.849 "io_failed": 0, 00:26:33.849 "io_timeout": 0, 00:26:33.849 "avg_latency_us": 
11173.593711662807, 00:26:33.849 "min_latency_us": 2090.9104761904764, 00:26:33.849 "max_latency_us": 16103.131428571429 00:26:33.849 } 00:26:33.849 ], 00:26:33.849 "core_count": 1 00:26:33.849 } 00:26:33.849 10:54:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:33.849 [2024-11-19 10:54:17.279621] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:26:33.849 [2024-11-19 10:54:17.279678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030386 ] 00:26:33.849 [2024-11-19 10:54:17.357277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.849 [2024-11-19 10:54:17.394673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.849 [2024-11-19 10:54:19.160337] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:33.849 [2024-11-19 10:54:19.160382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.849 [2024-11-19 10:54:19.160393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.849 [2024-11-19 10:54:19.160401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.849 [2024-11-19 10:54:19.160408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.849 [2024-11-19 10:54:19.160415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.849 [2024-11-19 10:54:19.160422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.849 [2024-11-19 10:54:19.160429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.849 [2024-11-19 10:54:19.160436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.849 [2024-11-19 10:54:19.160442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:26:33.849 [2024-11-19 10:54:19.160466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:26:33.849 [2024-11-19 10:54:19.160480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5d340 (9): Bad file descriptor 00:26:33.849 [2024-11-19 10:54:19.171010] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:26:33.849 Running I/O for 1 seconds... 
00:26:33.849 11346.00 IOPS, 44.32 MiB/s 00:26:33.849 Latency(us) 00:26:33.849 [2024-11-19T09:54:23.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.849 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:33.849 Verification LBA range: start 0x0 length 0x4000 00:26:33.849 NVMe0n1 : 1.01 11414.68 44.59 0.00 0.00 11173.59 2090.91 16103.13 00:26:33.849 [2024-11-19T09:54:23.641Z] =================================================================================================================== 00:26:33.849 [2024-11-19T09:54:23.641Z] Total : 11414.68 44.59 0.00 0.00 11173.59 2090.91 16103.13 00:26:33.849 10:54:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:33.849 10:54:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:34.107 10:54:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:34.367 10:54:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:34.367 10:54:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:34.367 10:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:34.626 10:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:37.915 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:37.915 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:26:37.915 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 4030386
00:26:37.915 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4030386 ']'
00:26:37.915 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4030386
00:26:37.915 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:26:37.915 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:37.915 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4030386
00:26:37.915 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:37.915 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:37.915 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4030386'
00:26:37.915 killing process with pid 4030386
00:26:37.915 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4030386
00:26:37.915 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4030386
00:26:38.174 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:26:38.174 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:38.174 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:26:38.174 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:38.174 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:26:38.174 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:38.174 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:26:38.174 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:38.174 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:26:38.174 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:38.174 10:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:38.174 rmmod nvme_tcp
00:26:38.433 rmmod nvme_fabrics
00:26:38.433 rmmod nvme_keyring
00:26:38.433 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:38.433 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:26:38.433 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:26:38.433 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 4026861 ']'
00:26:38.433 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 4026861
00:26:38.433 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4026861 ']'
00:26:38.433 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4026861
00:26:38.433 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:26:38.433 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:38.433 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4026861
00:26:38.433 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:38.433 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:38.433 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4026861'
00:26:38.433 killing process with pid 4026861
00:26:38.433 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4026861
00:26:38.433 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4026861
00:26:38.696 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:38.696 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:38.696 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:38.696 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:26:38.696 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:26:38.696 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:38.696 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:26:38.696 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:38.696 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:38.696 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:38.696 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:38.696 10:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:40.644 10:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:40.644
00:26:40.644 real 0m37.872s
00:26:40.644 user 1m59.737s
00:26:40.644 sys 0m7.921s
00:26:40.644 10:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:40.644 10:54:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:40.644 ************************************
00:26:40.644 END TEST nvmf_failover
00:26:40.644 ************************************
00:26:40.644 10:54:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:26:40.644 10:54:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:40.644 10:54:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:40.644 10:54:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.644 ************************************
00:26:40.644 START TEST nvmf_host_discovery
00:26:40.644 ************************************
00:26:40.644 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:26:40.904 * Looking for test storage...
00:26:40.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:40.904 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:26:40.904 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version
00:26:40.904 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:26:40.904 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:26:40.904 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:26:40.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:40.905 --rc genhtml_branch_coverage=1
00:26:40.905 --rc genhtml_function_coverage=1
00:26:40.905 --rc genhtml_legend=1
00:26:40.905 --rc geninfo_all_blocks=1
00:26:40.905 --rc geninfo_unexecuted_blocks=1
00:26:40.905
00:26:40.905 '
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:26:40.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:40.905 --rc genhtml_branch_coverage=1
00:26:40.905 --rc genhtml_function_coverage=1
00:26:40.905 --rc genhtml_legend=1
00:26:40.905 --rc geninfo_all_blocks=1
00:26:40.905 --rc geninfo_unexecuted_blocks=1
00:26:40.905
00:26:40.905 '
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:26:40.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:40.905 --rc genhtml_branch_coverage=1
00:26:40.905 --rc genhtml_function_coverage=1
00:26:40.905 --rc genhtml_legend=1
00:26:40.905 --rc geninfo_all_blocks=1
00:26:40.905 --rc geninfo_unexecuted_blocks=1
00:26:40.905
00:26:40.905 '
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:26:40.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:40.905 --rc genhtml_branch_coverage=1
00:26:40.905 --rc genhtml_function_coverage=1
00:26:40.905 --rc genhtml_legend=1
00:26:40.905 --rc geninfo_all_blocks=1
00:26:40.905 --rc geninfo_unexecuted_blocks=1
00:26:40.905
00:26:40.905 '
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:26:40.905 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:40.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable
00:26:40.906 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=()
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=()
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=()
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=()
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=()
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:26:47.482 Found 0000:86:00.0 (0x8086 - 0x159b)
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:26:47.482 Found 0000:86:00.1 (0x8086 - 0x159b)
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:26:47.482 Found net devices under 0000:86:00.0: cvl_0_0
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:26:47.482 Found net devices under 0000:86:00.1: cvl_0_1
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:26:47.482 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:47.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:47.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms
00:26:47.483
00:26:47.483 --- 10.0.0.2 ping statistics ---
00:26:47.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:47.483 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:47.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:47.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms
00:26:47.483
00:26:47.483 --- 10.0.0.1 ping statistics ---
00:26:47.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:47.483 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=4035550
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 4035550
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 4035550 ']'
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:47.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.483 [2024-11-19 10:54:36.622365] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:26:47.483 [2024-11-19 10:54:36.622409] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.483 [2024-11-19 10:54:36.702437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.483 [2024-11-19 10:54:36.742722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.483 [2024-11-19 10:54:36.742755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.483 [2024-11-19 10:54:36.742762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.483 [2024-11-19 10:54:36.742768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:47.483 [2024-11-19 10:54:36.742773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:47.483 [2024-11-19 10:54:36.743315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.483 [2024-11-19 10:54:36.877757] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.483 [2024-11-19 10:54:36.889938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:47.483 10:54:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.483 null0 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.483 null1 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4035666 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 4035666 /tmp/host.sock 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 4035666 ']' 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:47.483 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:47.483 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.483 [2024-11-19 10:54:36.965552] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:26:47.483 [2024-11-19 10:54:36.965595] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4035666 ] 00:26:47.483 [2024-11-19 10:54:37.037971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.484 [2024-11-19 10:54:37.080234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:47.484 
10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:47.484 10:54:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:47.484 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.743 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:47.743 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:47.743 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.743 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.743 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.743 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:47.743 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:47.743 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.743 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:47.743 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
00:26:47.743 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:47.743 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:47.743 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.743 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:47.743 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:47.743 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:47.744 
10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.744 [2024-11-19 10:54:37.479425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:47.744 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:26:48.003 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:48.570 [2024-11-19 10:54:38.236379] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:48.570 [2024-11-19 10:54:38.236398] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:48.571 [2024-11-19 10:54:38.236410] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:48.571 [2024-11-19 10:54:38.322661] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:48.829 [2024-11-19 10:54:38.377257] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:48.829 [2024-11-19 10:54:38.377984] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x10b2dd0:1 started. 00:26:48.829 [2024-11-19 10:54:38.379330] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:48.829 [2024-11-19 10:54:38.379345] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:48.829 [2024-11-19 10:54:38.384282] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x10b2dd0 was disconnected and freed. delete nvme_qpair. 00:26:49.087 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:49.087 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:49.087 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:49.087 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:49.087 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:49.087 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.087 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:49.087 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.087 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:49.087 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.087 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.087 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:49.088 10:54:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:49.088 
10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.088 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:49.346 [2024-11-19 10:54:39.114966] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x10b31a0:1 started. 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:49.604 10:54:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.604 [2024-11-19 10:54:39.157786] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x10b31a0 was disconnected and freed. delete nvme_qpair. 
00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:49.604 10:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.541 [2024-11-19 10:54:40.246958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:50.541 [2024-11-19 10:54:40.247597] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:50.541 [2024-11-19 10:54:40.247620] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:50.541 10:54:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:50.541 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.801 [2024-11-19 10:54:40.334860] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:50.801 10:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:51.060 [2024-11-19 10:54:40.636389] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:51.060 [2024-11-19 10:54:40.636423] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:51.060 [2024-11-19 10:54:40.636431] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:26:51.060 [2024-11-19 10:54:40.636439] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:51.628 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:51.628 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:51.628 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:51.628 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:51.628 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:51.628 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.628 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:51.628 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.628 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.890 [2024-11-19 10:54:41.510591] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:51.890 [2024-11-19 10:54:41.510612] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:51.890 [2024-11-19 10:54:41.511943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.890 [2024-11-19 10:54:41.511959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.890 [2024-11-19 10:54:41.511967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:51.890 [2024-11-19 10:54:41.511973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.890 [2024-11-19 10:54:41.511980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.890 [2024-11-19 10:54:41.511987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.890 [2024-11-19 10:54:41.511993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.890 [2024-11-19 10:54:41.512000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.890 [2024-11-19 10:54:41.512006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1083390 is same with the state(6) to be set 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:51.890 10:54:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:51.890 [2024-11-19 10:54:41.521957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1083390 (9): Bad file descriptor 00:26:51.890 [2024-11-19 10:54:41.531991] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.890 [2024-11-19 10:54:41.532003] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:51.890 [2024-11-19 10:54:41.532008] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.890 [2024-11-19 10:54:41.532012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.890 [2024-11-19 10:54:41.532028] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:51.890 [2024-11-19 10:54:41.532275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.890 [2024-11-19 10:54:41.532290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1083390 with addr=10.0.0.2, port=4420 00:26:51.890 [2024-11-19 10:54:41.532298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1083390 is same with the state(6) to be set 00:26:51.890 [2024-11-19 10:54:41.532313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1083390 (9): Bad file descriptor 00:26:51.890 [2024-11-19 10:54:41.532323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.890 [2024-11-19 10:54:41.532330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.890 [2024-11-19 10:54:41.532337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:51.890 [2024-11-19 10:54:41.532343] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:51.890 [2024-11-19 10:54:41.532348] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.890 [2024-11-19 10:54:41.532353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:51.890 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.890 [2024-11-19 10:54:41.542058] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.890 [2024-11-19 10:54:41.542068] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:26:51.890 [2024-11-19 10:54:41.542072] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.890 [2024-11-19 10:54:41.542076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.890 [2024-11-19 10:54:41.542089] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:51.890 [2024-11-19 10:54:41.542352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.890 [2024-11-19 10:54:41.542364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1083390 with addr=10.0.0.2, port=4420 00:26:51.890 [2024-11-19 10:54:41.542371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1083390 is same with the state(6) to be set 00:26:51.890 [2024-11-19 10:54:41.542381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1083390 (9): Bad file descriptor 00:26:51.890 [2024-11-19 10:54:41.542391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.890 [2024-11-19 10:54:41.542398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.890 [2024-11-19 10:54:41.542404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:51.891 [2024-11-19 10:54:41.542410] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:51.891 [2024-11-19 10:54:41.542415] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.891 [2024-11-19 10:54:41.542419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:51.891 [2024-11-19 10:54:41.552120] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.891 [2024-11-19 10:54:41.552130] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:51.891 [2024-11-19 10:54:41.552134] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.891 [2024-11-19 10:54:41.552138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.891 [2024-11-19 10:54:41.552150] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:51.891 [2024-11-19 10:54:41.552430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.891 [2024-11-19 10:54:41.552443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1083390 with addr=10.0.0.2, port=4420 00:26:51.891 [2024-11-19 10:54:41.552452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1083390 is same with the state(6) to be set 00:26:51.891 [2024-11-19 10:54:41.552463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1083390 (9): Bad file descriptor 00:26:51.891 [2024-11-19 10:54:41.552472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.891 [2024-11-19 10:54:41.552478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.891 [2024-11-19 10:54:41.552484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:51.891 [2024-11-19 10:54:41.552489] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:51.891 [2024-11-19 10:54:41.552493] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.891 [2024-11-19 10:54:41.552497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:51.891 [2024-11-19 10:54:41.562182] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.891 [2024-11-19 10:54:41.562197] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:51.891 [2024-11-19 10:54:41.562205] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.891 [2024-11-19 10:54:41.562210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.891 [2024-11-19 10:54:41.562224] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:51.891 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.891 [2024-11-19 10:54:41.562395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.891 [2024-11-19 10:54:41.562409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1083390 with addr=10.0.0.2, port=4420 00:26:51.891 [2024-11-19 10:54:41.562415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1083390 is same with the state(6) to be set 00:26:51.891 [2024-11-19 10:54:41.562425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1083390 (9): Bad file descriptor 00:26:51.891 [2024-11-19 10:54:41.562435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.891 [2024-11-19 10:54:41.562440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.891 [2024-11-19 10:54:41.562447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:51.891 [2024-11-19 10:54:41.562452] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:51.891 [2024-11-19 10:54:41.562457] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.891 [2024-11-19 10:54:41.562461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:51.891 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:51.891 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:51.891 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:51.891 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:51.891 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:51.891 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:51.891 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:51.891 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.891 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:51.891 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.891 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:51.891 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.891 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:51.891 [2024-11-19 10:54:41.572255] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.891 [2024-11-19 10:54:41.572270] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:26:51.891 [2024-11-19 10:54:41.572275] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.891 [2024-11-19 10:54:41.572279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.891 [2024-11-19 10:54:41.572293] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:51.891 [2024-11-19 10:54:41.572463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.891 [2024-11-19 10:54:41.572482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1083390 with addr=10.0.0.2, port=4420 00:26:51.891 [2024-11-19 10:54:41.572489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1083390 is same with the state(6) to be set 00:26:51.891 [2024-11-19 10:54:41.572499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1083390 (9): Bad file descriptor 00:26:51.891 [2024-11-19 10:54:41.572508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.891 [2024-11-19 10:54:41.572514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.891 [2024-11-19 10:54:41.572521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:51.891 [2024-11-19 10:54:41.572527] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:51.891 [2024-11-19 10:54:41.572531] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.891 [2024-11-19 10:54:41.572535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:51.891 [2024-11-19 10:54:41.582323] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.891 [2024-11-19 10:54:41.582333] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:51.891 [2024-11-19 10:54:41.582337] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.891 [2024-11-19 10:54:41.582341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.891 [2024-11-19 10:54:41.582354] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:51.891 [2024-11-19 10:54:41.582530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.891 [2024-11-19 10:54:41.582548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1083390 with addr=10.0.0.2, port=4420 00:26:51.891 [2024-11-19 10:54:41.582555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1083390 is same with the state(6) to be set 00:26:51.892 [2024-11-19 10:54:41.582565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1083390 (9): Bad file descriptor 00:26:51.892 [2024-11-19 10:54:41.582581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.892 [2024-11-19 10:54:41.582587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.892 [2024-11-19 10:54:41.582593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:51.892 [2024-11-19 10:54:41.582599] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:51.892 [2024-11-19 10:54:41.582603] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.892 [2024-11-19 10:54:41.582607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:51.892 [2024-11-19 10:54:41.592384] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.892 [2024-11-19 10:54:41.592394] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:51.892 [2024-11-19 10:54:41.592398] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.892 [2024-11-19 10:54:41.592401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.892 [2024-11-19 10:54:41.592412] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:51.892 [2024-11-19 10:54:41.592660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.892 [2024-11-19 10:54:41.592672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1083390 with addr=10.0.0.2, port=4420 00:26:51.892 [2024-11-19 10:54:41.592679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1083390 is same with the state(6) to be set 00:26:51.892 [2024-11-19 10:54:41.592689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1083390 (9): Bad file descriptor 00:26:51.892 [2024-11-19 10:54:41.592698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.892 [2024-11-19 10:54:41.592704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.892 [2024-11-19 10:54:41.592710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:51.892 [2024-11-19 10:54:41.592716] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:51.892 [2024-11-19 10:54:41.592720] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.892 [2024-11-19 10:54:41.592724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:51.892 [2024-11-19 10:54:41.596860] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:51.892 [2024-11-19 10:54:41.596873] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.892 
10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:51.892 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_subsystem_names 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.152 10:54:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.152 10:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.531 [2024-11-19 10:54:42.907345] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:53.532 [2024-11-19 10:54:42.907361] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:53.532 [2024-11-19 10:54:42.907372] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:53.532 [2024-11-19 10:54:42.993620] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 
new subsystem nvme0 00:26:53.532 [2024-11-19 10:54:43.173526] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:53.532 [2024-11-19 10:54:43.174133] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x10acba0:1 started. 00:26:53.532 [2024-11-19 10:54:43.175686] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:53.532 [2024-11-19 10:54:43.175709] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.532 [2024-11-19 10:54:43.176865] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x10acba0 was disconnected and freed. delete nvme_qpair. 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # 
case "$(type -t "$arg")" in 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.532 request: 00:26:53.532 { 00:26:53.532 "name": "nvme", 00:26:53.532 "trtype": "tcp", 00:26:53.532 "traddr": "10.0.0.2", 00:26:53.532 "adrfam": "ipv4", 00:26:53.532 "trsvcid": "8009", 00:26:53.532 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:53.532 "wait_for_attach": true, 00:26:53.532 "method": "bdev_nvme_start_discovery", 00:26:53.532 "req_id": 1 00:26:53.532 } 00:26:53.532 Got JSON-RPC error response 00:26:53.532 response: 00:26:53.532 { 00:26:53.532 "code": -17, 00:26:53.532 "message": "File exists" 00:26:53.532 } 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.532 request: 00:26:53.532 { 00:26:53.532 "name": "nvme_second", 00:26:53.532 "trtype": "tcp", 00:26:53.532 "traddr": "10.0.0.2", 00:26:53.532 "adrfam": "ipv4", 00:26:53.532 "trsvcid": "8009", 00:26:53.532 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:53.532 "wait_for_attach": true, 00:26:53.532 "method": "bdev_nvme_start_discovery", 00:26:53.532 "req_id": 1 00:26:53.532 } 00:26:53.532 Got JSON-RPC error response 00:26:53.532 response: 00:26:53.532 { 00:26:53.532 "code": -17, 00:26:53.532 "message": "File exists" 00:26:53.532 } 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 
00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.532 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.792 10:54:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.792 10:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.729 [2024-11-19 10:54:44.411050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-11-19 10:54:44.411075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x109b160 with addr=10.0.0.2, port=8010 00:26:54.729 [2024-11-19 10:54:44.411088] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:54.729 [2024-11-19 10:54:44.411115] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:54.729 [2024-11-19 10:54:44.411123] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:55.666 [2024-11-19 10:54:45.413428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.666 [2024-11-19 10:54:45.413451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x109b160 with addr=10.0.0.2, port=8010 00:26:55.666 [2024-11-19 10:54:45.413463] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:55.666 [2024-11-19 10:54:45.413469] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:55.666 [2024-11-19 10:54:45.413491] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:57.043 [2024-11-19 10:54:46.415732] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:57.043 request: 00:26:57.043 { 00:26:57.043 "name": "nvme_second", 00:26:57.043 "trtype": "tcp", 00:26:57.043 "traddr": "10.0.0.2", 00:26:57.043 "adrfam": "ipv4", 00:26:57.043 "trsvcid": "8010", 00:26:57.043 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:57.043 "wait_for_attach": false, 00:26:57.043 "attach_timeout_ms": 3000, 00:26:57.043 "method": "bdev_nvme_start_discovery", 00:26:57.043 "req_id": 1 00:26:57.043 } 00:26:57.043 Got JSON-RPC error response 00:26:57.043 response: 00:26:57.043 { 00:26:57.043 "code": -110, 00:26:57.043 "message": "Connection timed out" 00:26:57.043 } 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:57.043 10:54:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4035666 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:57.043 10:54:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:57.043 rmmod nvme_tcp 00:26:57.043 rmmod nvme_fabrics 00:26:57.043 rmmod nvme_keyring 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 4035550 ']' 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 4035550 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 4035550 ']' 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 4035550 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4035550 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4035550' 
00:26:57.043 killing process with pid 4035550 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 4035550 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 4035550 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.043 10:54:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.594 10:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:59.594 00:26:59.594 real 0m18.421s 00:26:59.594 user 0m22.713s 00:26:59.594 sys 0m6.006s 00:26:59.594 10:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:59.595 10:54:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.595 ************************************ 00:26:59.595 END TEST nvmf_host_discovery 00:26:59.595 ************************************ 00:26:59.595 10:54:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:59.595 10:54:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:59.595 10:54:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:59.595 10:54:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.595 ************************************ 00:26:59.595 START TEST nvmf_host_multipath_status 00:26:59.595 ************************************ 00:26:59.595 10:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:59.595 * Looking for test storage... 
00:26:59.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:59.595 10:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:59.595 10:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:26:59.595 10:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:59.595 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:59.595 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:59.595 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:59.595 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:59.595 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:59.595 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:59.595 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:59.595 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:59.595 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:59.595 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:59.595 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:59.595 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:59.595 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:59.595 10:54:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:59.595 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:59.595 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:59.595 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:59.596 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:59.596 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:59.596 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:59.596 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:59.596 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:59.596 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:59.596 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:59.596 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:59.596 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:59.596 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:59.596 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:59.596 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:59.596 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:59.596 10:54:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:59.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.596 --rc genhtml_branch_coverage=1 00:26:59.596 --rc genhtml_function_coverage=1 00:26:59.596 --rc genhtml_legend=1 00:26:59.596 --rc geninfo_all_blocks=1 00:26:59.597 --rc geninfo_unexecuted_blocks=1 00:26:59.597 00:26:59.597 ' 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:59.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.597 --rc genhtml_branch_coverage=1 00:26:59.597 --rc genhtml_function_coverage=1 00:26:59.597 --rc genhtml_legend=1 00:26:59.597 --rc geninfo_all_blocks=1 00:26:59.597 --rc geninfo_unexecuted_blocks=1 00:26:59.597 00:26:59.597 ' 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:59.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.597 --rc genhtml_branch_coverage=1 00:26:59.597 --rc genhtml_function_coverage=1 00:26:59.597 --rc genhtml_legend=1 00:26:59.597 --rc geninfo_all_blocks=1 00:26:59.597 --rc geninfo_unexecuted_blocks=1 00:26:59.597 00:26:59.597 ' 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:59.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.597 --rc genhtml_branch_coverage=1 00:26:59.597 --rc genhtml_function_coverage=1 00:26:59.597 --rc genhtml_legend=1 00:26:59.597 --rc geninfo_all_blocks=1 00:26:59.597 --rc geninfo_unexecuted_blocks=1 00:26:59.597 00:26:59.597 ' 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:59.597 
10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.597 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.598 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:59.598 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:26:59.598 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.598 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:59.598 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.598 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.598 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.598 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.598 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.598 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.598 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:59.598 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.598 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:59.598 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:59.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:59.599 10:54:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:59.599 10:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:06.179 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.179 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:27:06.179 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:06.179 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:06.179 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:06.179 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:06.179 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:06.179 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:27:06.179 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:06.179 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:27:06.179 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:27:06.179 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:27:06.179 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:27:06.179 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:27:06.179 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:27:06.179 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:06.180 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:06.180 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:06.180 Found net devices under 0000:86:00.0: cvl_0_0 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.180 10:54:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:06.180 Found net devices under 0000:86:00.1: cvl_0_1 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.180 10:54:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.180 10:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:06.180 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:06.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:27:06.180 00:27:06.180 --- 10.0.0.2 ping statistics --- 00:27:06.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.180 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:27:06.180 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:06.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:27:06.180 00:27:06.180 --- 10.0.0.1 ping statistics --- 00:27:06.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.180 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:27:06.180 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.180 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:27:06.180 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:06.180 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.180 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:06.180 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:06.180 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.180 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:06.180 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:06.180 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:06.180 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:06.180 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:06.181 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:06.181 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=4040875 00:27:06.181 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 4040875 00:27:06.181 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:06.181 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 4040875 ']' 00:27:06.181 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.181 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.181 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.181 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.181 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:06.181 [2024-11-19 10:54:55.116837] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:27:06.181 [2024-11-19 10:54:55.116881] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.181 [2024-11-19 10:54:55.195395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:06.181 [2024-11-19 10:54:55.237829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.181 [2024-11-19 10:54:55.237864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:06.181 [2024-11-19 10:54:55.237872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.181 [2024-11-19 10:54:55.237878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.181 [2024-11-19 10:54:55.237883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:06.181 [2024-11-19 10:54:55.242236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.181 [2024-11-19 10:54:55.242240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.181 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:06.181 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:06.181 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:06.181 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:06.181 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:06.440 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.440 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4040875 00:27:06.440 10:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:06.440 [2024-11-19 10:54:56.155411] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.440 10:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:27:06.699 Malloc0 00:27:06.699 10:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:06.957 10:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:07.216 10:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:07.216 [2024-11-19 10:54:56.967518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.216 10:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:07.475 [2024-11-19 10:54:57.180028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:07.475 10:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4041330 00:27:07.475 10:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:07.475 10:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:07.475 10:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4041330 /var/tmp/bdevperf.sock 00:27:07.475 10:54:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 4041330 ']' 00:27:07.475 10:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:07.475 10:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:07.475 10:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:07.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:07.475 10:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:07.475 10:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:07.734 10:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:07.734 10:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:07.734 10:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:07.992 10:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:08.251 Nvme0n1 00:27:08.251 10:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:08.820 Nvme0n1 00:27:08.820 10:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:08.820 10:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:10.726 10:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:10.726 10:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:10.985 10:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:11.244 10:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:12.182 10:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:12.182 10:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:12.182 10:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.182 10:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:12.441 10:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.441 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:12.441 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.441 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:12.441 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:12.441 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:12.441 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.441 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:12.701 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.701 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:12.701 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.701 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:12.960 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.960 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:12.960 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.960 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:13.220 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.220 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:13.220 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.220 10:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:13.479 10:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.479 10:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:13.479 10:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:13.737 10:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:13.737 10:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:15.115 10:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:15.115 10:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:15.115 10:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.115 10:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:15.115 10:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:15.115 10:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:15.115 10:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:15.115 10:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.374 10:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.374 10:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:15.374 10:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.374 10:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:15.374 10:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.374 10:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:15.374 10:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.374 10:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:15.633 10:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.633 10:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:15.633 10:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.633 10:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:15.893 10:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.893 10:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:15.893 10:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.893 10:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:16.152 10:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.152 10:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:16.152 10:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:16.411 10:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:16.411 10:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:17.799 10:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:17.799 10:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:17.799 10:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.799 10:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:17.799 10:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:17.799 10:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:17.799 10:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.799 10:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:17.799 10:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:17.799 10:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:17.799 10:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.799 10:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:18.064 10:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.064 10:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:18.064 10:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:18.064 10:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.323 10:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.323 10:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:18.323 10:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.323 10:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:18.582 10:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.582 10:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:18.582 10:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.582 10:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:18.841 10:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.841 10:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:18.841 10:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:19.100 10:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:19.100 10:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:20.481 10:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:20.481 10:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:20.481 10:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.481 10:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:20.481 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.481 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:20.481 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.481 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:20.738 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:20.738 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:20.738 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.738 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:20.738 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.739 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:20.739 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.739 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:20.997 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.997 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:20.997 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:20.997 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.256 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.256 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:21.256 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.256 10:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:21.516 10:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:21.516 10:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:21.516 10:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:21.775 10:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:21.775 10:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:23.151 10:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:23.151 10:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:23.151 10:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.151 10:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:23.151 10:55:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:23.151 10:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:23.151 10:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.151 10:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:23.410 10:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:23.410 10:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:23.410 10:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.410 10:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:23.410 10:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.410 10:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:23.410 10:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:23.410 10:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.668 
10:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.668 10:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:23.668 10:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.668 10:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:23.927 10:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:23.927 10:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:23.927 10:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:23.927 10:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:24.185 10:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:24.185 10:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:24.185 10:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:24.185 10:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:24.443 10:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:25.379 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:25.379 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:25.379 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.379 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:25.637 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:25.637 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:25.637 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.637 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:25.896 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.896 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:25.896 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.896 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:26.154 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.154 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:26.154 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.154 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:26.414 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.414 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:26.414 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.414 10:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:26.414 10:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:26.414 10:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:26.414 10:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.414 10:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:26.672 10:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.672 10:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:26.932 10:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:26.932 10:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:27.189 10:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:27.445 10:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:28.406 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:28.406 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:28.406 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:28.406 
10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.684 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.684 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:28.684 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.684 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:28.684 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.684 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:28.684 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.684 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:28.976 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.976 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:28.976 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:27:28.976 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:29.252 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.252 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:29.252 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.252 10:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:29.513 10:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.513 10:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:29.513 10:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.513 10:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:29.770 10:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.770 10:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:29.770 10:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:29.770 10:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:30.029 10:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:30.965 10:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:30.965 10:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:30.965 10:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.965 10:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:31.225 10:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:31.225 10:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:31.225 10:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.225 10:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:31.485 10:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.485 10:55:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:31.485 10:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.485 10:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:31.745 10:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.745 10:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:31.745 10:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:31.745 10:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.004 10:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.004 10:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:32.004 10:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.004 10:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:32.263 10:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.263 
10:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:32.263 10:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.263 10:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:32.263 10:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.263 10:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:32.263 10:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:32.523 10:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:32.782 10:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:33.720 10:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:33.720 10:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:33.720 10:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.720 10:55:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:33.979 10:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.979 10:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:33.979 10:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.979 10:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:34.238 10:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:34.238 10:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:34.238 10:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.238 10:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:34.497 10:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:34.497 10:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:34.497 10:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.497 10:55:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:34.756 10:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:34.756 10:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:34.756 10:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.756 10:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:34.756 10:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:34.756 10:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:34.756 10:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.756 10:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:35.014 10:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.014 10:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:35.014 10:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:35.272 10:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:35.532 10:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:36.470 10:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:36.470 10:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:36.470 10:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.470 10:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:36.729 10:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:36.729 10:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:36.729 10:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.729 10:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:36.989 10:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:36.989 10:55:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:36.989 10:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.989 10:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:37.249 10:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.249 10:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:37.249 10:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.249 10:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:37.249 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.249 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:37.249 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.249 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:37.508 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.508 
10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:37.508 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.508 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:37.767 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:37.767 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4041330 00:27:37.767 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 4041330 ']' 00:27:37.767 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 4041330 00:27:37.767 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:37.767 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:37.767 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4041330 00:27:37.767 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:37.767 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:37.767 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4041330' 00:27:37.767 killing process with pid 4041330 00:27:37.767 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 4041330 00:27:37.767 
10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 4041330 00:27:37.767 { 00:27:37.767 "results": [ 00:27:37.767 { 00:27:37.767 "job": "Nvme0n1", 00:27:37.767 "core_mask": "0x4", 00:27:37.767 "workload": "verify", 00:27:37.767 "status": "terminated", 00:27:37.767 "verify_range": { 00:27:37.767 "start": 0, 00:27:37.767 "length": 16384 00:27:37.767 }, 00:27:37.767 "queue_depth": 128, 00:27:37.767 "io_size": 4096, 00:27:37.767 "runtime": 28.984143, 00:27:37.767 "iops": 10644.924019316355, 00:27:37.767 "mibps": 41.58173445045451, 00:27:37.767 "io_failed": 0, 00:27:37.767 "io_timeout": 0, 00:27:37.767 "avg_latency_us": 12004.716218084479, 00:27:37.767 "min_latency_us": 493.4704761904762, 00:27:37.767 "max_latency_us": 3083812.083809524 00:27:37.767 } 00:27:37.767 ], 00:27:37.767 "core_count": 1 00:27:37.767 } 00:27:38.058 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4041330 00:27:38.058 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:38.058 [2024-11-19 10:54:57.253390] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:27:38.058 [2024-11-19 10:54:57.253447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4041330 ] 00:27:38.058 [2024-11-19 10:54:57.329412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.058 [2024-11-19 10:54:57.369789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:38.059 Running I/O for 90 seconds... 
00:27:38.059 11523.00 IOPS, 45.01 MiB/s [2024-11-19T09:55:27.851Z] 11459.50 IOPS, 44.76 MiB/s [2024-11-19T09:55:27.851Z] 11482.33 IOPS, 44.85 MiB/s [2024-11-19T09:55:27.851Z] 11508.50 IOPS, 44.96 MiB/s [2024-11-19T09:55:27.851Z] 11531.20 IOPS, 45.04 MiB/s [2024-11-19T09:55:27.851Z] 11511.00 IOPS, 44.96 MiB/s [2024-11-19T09:55:27.851Z] 11485.71 IOPS, 44.87 MiB/s [2024-11-19T09:55:27.851Z] 11487.12 IOPS, 44.87 MiB/s [2024-11-19T09:55:27.851Z] 11484.67 IOPS, 44.86 MiB/s [2024-11-19T09:55:27.851Z] 11498.10 IOPS, 44.91 MiB/s [2024-11-19T09:55:27.851Z] 11509.00 IOPS, 44.96 MiB/s [2024-11-19T09:55:27.851Z] 11496.83 IOPS, 44.91 MiB/s [2024-11-19T09:55:27.851Z] [2024-11-19 10:55:11.300185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300300] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.059 [2024-11-19 10:55:11.300375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.059 [2024-11-19 10:55:11.300394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300406] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.059 [2024-11-19 10:55:11.300421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:3 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.300982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.300989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.301001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.301008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.301020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.301027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.301039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.301046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.301058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.301065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.301077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.301084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.301096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.301102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.301116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:131056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.301123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.301135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.301142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.301154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.301161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.301175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.059 [2024-11-19 10:55:11.301182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:38.059 [2024-11-19 10:55:11.301194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 
sqhd:0075 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 
[2024-11-19 10:55:11.301548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301656] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.301959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.301966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.302186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.302197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.302217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.302224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.302236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.302243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.302256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.302262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.302274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.302281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.302293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.302300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.302314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.302321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.302333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.302340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:38.060 [2024-11-19 10:55:11.302352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.060 [2024-11-19 10:55:11.302358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302395] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:85 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.302984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.302997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.303004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.303223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.303233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:38.061 
[2024-11-19 10:55:11.303246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.303253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.303266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.303273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.303285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.303291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.303303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.303310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.303322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.303329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.303341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.303348] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.303360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.061 [2024-11-19 10:55:11.303367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.303379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.061 [2024-11-19 10:55:11.303386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.303398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.061 [2024-11-19 10:55:11.303405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.303418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.061 [2024-11-19 10:55:11.303425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.303443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.061 [2024-11-19 10:55:11.303450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.303462] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.061 [2024-11-19 10:55:11.303469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.303481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.061 [2024-11-19 10:55:11.303488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.303501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.061 [2024-11-19 10:55:11.303508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:38.061 [2024-11-19 10:55:11.303520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.303527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.303541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.303549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.303698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.303709] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.303722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.303729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.303741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.303748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.303760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.303767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.303779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.303786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.303798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.303804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.303819] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.303826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.303838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.303845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.303856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.303863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.303876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.303883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.303896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.303902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.303914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.303922] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.303934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.303940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.303952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.303959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.303971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.303978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.303992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.303998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.304017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304029] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.304036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.304057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.304075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.304094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.304113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.304131] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.062 [2024-11-19 10:55:11.304151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.304169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.304374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.304399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.304418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.304436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.304455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.304477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.304496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.304515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.304534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.304552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.062 [2024-11-19 10:55:11.304572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:38.062 [2024-11-19 10:55:11.304583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.304590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.304602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.304608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.304620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.304627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.304639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.304645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.304657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.304664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.304676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.304682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.304696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.304704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.304716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.304724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.304736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.304742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.304754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:131056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.304761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.304773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.304780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.304792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.304799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.304812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.304819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305171] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 
nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.305378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.305385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.315580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.315595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:38.063 [2024-11-19 10:55:11.315608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.063 [2024-11-19 10:55:11.315615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:38.064 [2024-11-19 10:55:11.315627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.064 [2024-11-19 10:55:11.315634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:38.064 [2024-11-19 10:55:11.315645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.064 [2024-11-19 10:55:11.315652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:38.064 [2024-11-19 10:55:11.315664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.064 [2024-11-19 10:55:11.315671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:38.064 [2024-11-19 10:55:11.315683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:38.064 [2024-11-19 10:55:11.315690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:38.064 [2024-11-19 10:55:11.315938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.064 [2024-11-19 10:55:11.315950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:38.064
[~110 further command/completion pairs elided: from 10:55:11.315965 through 10:55:11.318913, nvme_qpair.c repeated the same pattern on qid:1 — WRITE commands (lba 216–131064 and lba 0–88, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (lba 130624–130824, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 000d–007d, p:0 m:0 dnr:0]
[2024-11-19 10:55:11.318925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.318932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.318944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 
nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.318950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.318962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.318969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.318983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.318990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.319002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.319008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.319020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.319027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.319039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.319045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 
cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.319058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.319065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.319076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.319083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.319095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.319102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.319113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.319120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.319132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.319139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.319152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:38.067 [2024-11-19 10:55:11.319158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.319886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.319900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.319918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.319928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.319945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.319957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.319973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.319982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.319998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.320008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 
10:55:11.320024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.320033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.320049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.320058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.320075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.320084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.320100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.320109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.320125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.320134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.320151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.320159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.320176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.320185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.320206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.320216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.320232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.320241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.320258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.320269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.320286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.320295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.320311] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.320321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.320532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.320546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.320563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.067 [2024-11-19 10:55:11.320573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:38.067 [2024-11-19 10:55:11.320589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.320599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.320615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.320624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.320641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.320650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.320666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.320675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.320693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.320703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.320722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.320731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.320747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.320756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.320772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.320782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.320801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:416 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.320810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.320826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.320835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.320851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.320860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.320877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.320886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.320902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.320911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.320927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.320936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 
dnr:0 00:27:38.068 [2024-11-19 10:55:11.320952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.320961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.320977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.320986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.321002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.321011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.321027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.321036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.321053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.321062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.321078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 
10:55:11.321087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.321105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.321117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.327521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.327546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.327572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.327597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327613] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.327622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.327647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.068 [2024-11-19 10:55:11.327672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.068 [2024-11-19 10:55:11.327698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.068 [2024-11-19 10:55:11.327723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.068 [2024-11-19 10:55:11.327749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.068 [2024-11-19 10:55:11.327774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.068 [2024-11-19 10:55:11.327802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.068 [2024-11-19 10:55:11.327828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.068 [2024-11-19 10:55:11.327853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.068 [2024-11-19 10:55:11.327879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.068 [2024-11-19 10:55:11.327904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.068 [2024-11-19 10:55:11.327929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:38.068 [2024-11-19 10:55:11.327945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.069 [2024-11-19 10:55:11.327954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.327970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.069 [2024-11-19 10:55:11.327979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.327996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.069 [2024-11-19 10:55:11.328005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.328024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.069 [2024-11-19 10:55:11.328036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.328053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.069 [2024-11-19 10:55:11.328064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.328084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.069 [2024-11-19 10:55:11.328097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.328116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.069 [2024-11-19 10:55:11.328125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.328144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.069 [2024-11-19 10:55:11.328155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.328173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.069 [2024-11-19 10:55:11.328182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.328198] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.069 [2024-11-19 10:55:11.328212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.328228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.069 [2024-11-19 10:55:11.328237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.328253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.069 [2024-11-19 10:55:11.328263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.328283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.069 [2024-11-19 10:55:11.328293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.328889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.328906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.328924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.328934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.328950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.328959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.328975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.328984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.069 [2024-11-19 10:55:11.329089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.069 [2024-11-19 10:55:11.329127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.069 [2024-11-19 10:55:11.329163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:38.069 [2024-11-19 10:55:11.329744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.069 [2024-11-19 10:55:11.329757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.329779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.329792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.329815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.329827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.329849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.329863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.329885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.329900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.329922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.329937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.329959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:131056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.329972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.329995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330290] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:144 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 
m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.330971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.330984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.331007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.331019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.331041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.331055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.331077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 
10:55:11.331090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.331114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.331129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:38.070 [2024-11-19 10:55:11.331151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.070 [2024-11-19 10:55:11.331164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.331186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.331199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.331228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.331240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.331263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.331278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.331301] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.331314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.331336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.331349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.331372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.331384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.331407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.331419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.331441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.331454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.331478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.331490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.332616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.332639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.332666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.332681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.332705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.332721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.332743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.332758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.332780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.332794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.332819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.332831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.332861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.332875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.332900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.332913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.332935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.332949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.332973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.332987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333430] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:63 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.071 [2024-11-19 10:55:11.333675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:38.071 [2024-11-19 10:55:11.333697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.072 [2024-11-19 10:55:11.333712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.333738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.333756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.333781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.333795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.333818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.333832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.333854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.333867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.333890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.333902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.333924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.333937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.333962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.333975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.333997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.072 [2024-11-19 10:55:11.334010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130704 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130792 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.072 [2024-11-19 10:55:11.334614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 
sqhd:0051 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.072 [2024-11-19 10:55:11.334652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.072 [2024-11-19 10:55:11.334688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.072 [2024-11-19 10:55:11.334722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.072 [2024-11-19 10:55:11.334757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.072 [2024-11-19 10:55:11.334791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:38.072 [2024-11-19 10:55:11.334826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:38.072 [2024-11-19 10:55:11.334919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.072 [2024-11-19 10:55:11.334931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.335861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.335882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.335907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.335919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 
dnr:0 00:27:38.073 [2024-11-19 10:55:11.335942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.335954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.335980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.335993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 
[2024-11-19 10:55:11.336132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 
10:55:11.336335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 
10:55:11.336522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:131056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336718] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.336966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.336978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.337001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.337013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.337035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.337047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.337069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.337081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.337104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.337116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.337138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.337150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.337173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.337186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.073 [2024-11-19 10:55:11.337212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.073 [2024-11-19 10:55:11.337225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337299] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.337965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.337987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.338000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.338022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.338034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.338057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.338069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.339029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.339050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.339074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.339091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.339114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.339126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.339149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.339171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.339186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.339194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.339214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.339222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.339237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.339245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.339259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.339267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.339282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.339290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.339305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.339313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.339327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.339335] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.339350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.339358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.339372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.339380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.339395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.074 [2024-11-19 10:55:11.339403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:38.074 [2024-11-19 10:55:11.339420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.075 [2024-11-19 10:55:11.339428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.075 [2024-11-19 10:55:11.339450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:103 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.075 [2024-11-19 10:55:11.339473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.075 [2024-11-19 10:55:11.339496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.075 [2024-11-19 10:55:11.339518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.075 [2024-11-19 10:55:11.339541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.075 [2024-11-19 10:55:11.339564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.075 [2024-11-19 10:55:11.339586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:42 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.075 [2024-11-19 10:55:11.339609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.075 [2024-11-19 10:55:11.339632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.075 [2024-11-19 10:55:11.339654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.075 [2024-11-19 10:55:11.339677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.075 [2024-11-19 10:55:11.339702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:38.075 [2024-11-19 10:55:11.339724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.075 [2024-11-19 10:55:11.339746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.075 [2024-11-19 10:55:11.339769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.075 [2024-11-19 10:55:11.339792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.339815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.339837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:27:38.075 [2024-11-19 10:55:11.339852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.339860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.339883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.339906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.339928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.339951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:38.075 [2024-11-19 10:55:11.339976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.339990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.339999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.340013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.340022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.340036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.340044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.340059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.340067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.340083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.340091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:27:38.075 [2024-11-19 10:55:11.340105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.340113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.340128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.340136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.340151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.340158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.340173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.340181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.340196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.340207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.340222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:38.075 [2024-11-19 10:55:11.340229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.340244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.340260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.340274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.340282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.340297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.075 [2024-11-19 10:55:11.340305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:38.075 [2024-11-19 10:55:11.340319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.076 [2024-11-19 10:55:11.340327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.076 [2024-11-19 10:55:11.340350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:27:38.076 [2024-11-19 10:55:11.340364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:38.076 [2024-11-19 10:55:11.340485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.076 [2024-11-19 10:55:11.340530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.076 [2024-11-19 10:55:11.340555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.076 [2024-11-19 10:55:11.340577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:27:38.076 [2024-11-19 10:55:11.340614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 
[2024-11-19 10:55:11.340735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 
10:55:11.340867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.340912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.340921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.341695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.341712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.341728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.341737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.341752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 
10:55:11.341760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.341774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.341782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.341797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.341805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.341819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.076 [2024-11-19 10:55:11.341827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:38.076 [2024-11-19 10:55:11.341842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:131056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.341850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.341867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.341876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 
10:55:11.341890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.341898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.341913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.341921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.341936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.341944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.341959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.341967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.341982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.341990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342139] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:176 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000d 
p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 
[2024-11-19 10:55:11.342654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.077 [2024-11-19 10:55:11.342724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:38.077 [2024-11-19 10:55:11.342738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.342746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.342761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.342769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343259] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343640] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:123 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.078 [2024-11-19 10:55:11.343979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.343994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.078 [2024-11-19 10:55:11.344002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.344017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130656 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:38.078 [2024-11-19 10:55:11.344025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.344040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.078 [2024-11-19 10:55:11.344049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.344063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.078 [2024-11-19 10:55:11.344071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.344086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.078 [2024-11-19 10:55:11.344094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:38.078 [2024-11-19 10:55:11.344109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 
sqhd:003f p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.079 [2024-11-19 10:55:11.344162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 
m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.344538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.079 [2024-11-19 10:55:11.344562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.079 [2024-11-19 10:55:11.344587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.079 [2024-11-19 10:55:11.344613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.079 [2024-11-19 10:55:11.344637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.344651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.079 [2024-11-19 10:55:11.344659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:27:38.079 [2024-11-19 10:55:11.344674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.079 [2024-11-19 10:55:11.344682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.345268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.079 [2024-11-19 10:55:11.345284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.345300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.345309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.345323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.345332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.345346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.079 [2024-11-19 10:55:11.345354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.345369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:38.079 [2024-11-19 10:55:11.345377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.345392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.079 [2024-11-19 10:55:11.345400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.345414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.079 [2024-11-19 10:55:11.345422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.345436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.079 [2024-11-19 10:55:11.345444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.345459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.079 [2024-11-19 10:55:11.345467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.345485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.079 [2024-11-19 10:55:11.345493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:38.079 
[2024-11-19 10:55:11.345508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.079 [2024-11-19 10:55:11.345516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.345530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.079 [2024-11-19 10:55:11.345538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.345553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.079 [2024-11-19 10:55:11.345561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:38.079 [2024-11-19 10:55:11.345575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.345598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.345621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 
10:55:11.345629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.345644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.345666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.345689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.345711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.345734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 
10:55:11.345759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.345781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.345804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.345826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.345849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:131056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.345871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345879] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.345894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.345916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.345939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.345962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.345984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.345992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346130] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:32 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:38.080 [2024-11-19 10:55:11.346447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.080 [2024-11-19 10:55:11.346456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.346470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.346478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.346493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.346501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.346516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:38.081 [2024-11-19 10:55:11.346524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.346539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.346547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.346562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.346571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.346585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.346593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.346609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.346618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.346632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.346640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:27:38.081 [2024-11-19 10:55:11.346654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.346663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.346677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.346685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.346699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.346707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.346722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.346730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.346745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.346753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 
10:55:11.347431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.081 [2024-11-19 10:55:11.347893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:38.081 [2024-11-19 10:55:11.347908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.082 [2024-11-19 10:55:11.347916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.347930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.082 [2024-11-19 10:55:11.347938] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.347953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.082 [2024-11-19 10:55:11.347961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.347976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.082 [2024-11-19 10:55:11.347984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.347998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.082 [2024-11-19 10:55:11.348009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.082 [2024-11-19 10:55:11.348034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.082 [2024-11-19 10:55:11.348057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:118 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.082 [2024-11-19 10:55:11.348079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.082 [2024-11-19 10:55:11.348102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.082 [2024-11-19 10:55:11.348126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.082 [2024-11-19 10:55:11.348148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.082 [2024-11-19 10:55:11.348171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130696 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.082 [2024-11-19 10:55:11.348358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130776 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 
cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.082 [2024-11-19 10:55:11.348732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.082 [2024-11-19 10:55:11.348755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.082 [2024-11-19 10:55:11.348778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.082 [2024-11-19 10:55:11.348801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:38.082 [2024-11-19 10:55:11.348815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.348824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.348839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:130864 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.348848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.348862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.348870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.348884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.348893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.348907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.083 [2024-11-19 10:55:11.348915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.348930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.083 [2024-11-19 10:55:11.348938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.348952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.083 [2024-11-19 10:55:11.348962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005a 
p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.348977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.348985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.348999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:27:38.083 [2024-11-19 10:55:11.349781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 
[2024-11-19 10:55:11.349884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.349970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:131056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 
10:55:11.349989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.349995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.350007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.350014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.350028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.350034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.350046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.350053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.350065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.350072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.350084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.350090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.350102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.350109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.350121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.350127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.350140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.350146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:38.083 [2024-11-19 10:55:11.350158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.083 [2024-11-19 10:55:11.350165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350306] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:92 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.350699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.350706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.351171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.351183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:38.084 
[2024-11-19 10:55:11.351199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.351213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.351225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.351232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.351244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.351251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.351263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.351270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.351282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.351288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.351300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 
10:55:11.351307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.351319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.351326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.351338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.351345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.351357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.351364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:38.084 [2024-11-19 10:55:11.351376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.084 [2024-11-19 10:55:11.351383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351413] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351730] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:62 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.085 [2024-11-19 10:55:11.351843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.085 [2024-11-19 10:55:11.351862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.085 [2024-11-19 10:55:11.351881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.085 [2024-11-19 10:55:11.351900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.085 [2024-11-19 10:55:11.351920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.085 [2024-11-19 10:55:11.351939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.085 [2024-11-19 10:55:11.351958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.085 [2024-11-19 10:55:11.351976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.351989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.085 [2024-11-19 10:55:11.351995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.352007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.085 [2024-11-19 10:55:11.352014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.352026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.085 [2024-11-19 10:55:11.352033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:38.085 [2024-11-19 10:55:11.352045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 
lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.086 [2024-11-19 10:55:11.352051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.086 [2024-11-19 10:55:11.352070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.086 [2024-11-19 10:55:11.352089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.086 [2024-11-19 10:55:11.352107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.086 [2024-11-19 10:55:11.352126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.086 [2024-11-19 10:55:11.352146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.086 [2024-11-19 10:55:11.352165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.086 [2024-11-19 10:55:11.352185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.086 [2024-11-19 10:55:11.352207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.086 [2024-11-19 10:55:11.352227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.086 [2024-11-19 10:55:11.352245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130816 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.086 [2024-11-19 10:55:11.352264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.086 [2024-11-19 10:55:11.352283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.352302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.352320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.352339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.352358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.352885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.352906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.352925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.086 [2024-11-19 10:55:11.352944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.086 [2024-11-19 10:55:11.352963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130640 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.086 [2024-11-19 10:55:11.352981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.352993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.353000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.353012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.353019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.353031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.353037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.353049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.353056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.353068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.353075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 
sqhd:005f p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.353086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.353093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.353105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.353114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.353126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.353132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.353144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.353151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.353162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.353169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.353181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.353188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.353200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.353212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.353224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.353231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.353243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.353250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.353262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.353268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:38.086 [2024-11-19 10:55:11.353280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.086 [2024-11-19 10:55:11.353288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:27:38.087 [2024-11-19 10:55:11.353300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:131056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 
[2024-11-19 10:55:11.353401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353507] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.353829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.353836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.354268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.354281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.354294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.354301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.354313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.354321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.354333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.354340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.354352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.354359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.354371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.354378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.354390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.354397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.354409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.354416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.354428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.354435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:38.087 [2024-11-19 10:55:11.354447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.087 [2024-11-19 10:55:11.354454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354772] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.354987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.354999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.355006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.355018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.355025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.355370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.355381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.355395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.355402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.355414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.355421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.355433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.355439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.355451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.355458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.355470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.355477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.355489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.355496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.355507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.355514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:38.088 [2024-11-19 10:55:11.355528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.088 [2024-11-19 10:55:11.355535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.089 [2024-11-19 10:55:11.355687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355952] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.355990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.355997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.356009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.089 [2024-11-19 10:55:11.356016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.356028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.089 [2024-11-19 10:55:11.356034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.356046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.089 [2024-11-19 10:55:11.356053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.356065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.089 [2024-11-19 10:55:11.356072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.356083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.089 [2024-11-19 10:55:11.356090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.356102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.089 [2024-11-19 10:55:11.356109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.356405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.089 [2024-11-19 10:55:11.356416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.356430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.356437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.356449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.356456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.356468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.089 [2024-11-19 10:55:11.356475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.356488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.089 [2024-11-19 10:55:11.356495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.356507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.089 [2024-11-19 10:55:11.356516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.356527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.089 [2024-11-19 10:55:11.356534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:38.089 [2024-11-19 10:55:11.356547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:131056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.356984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.356992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.357004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.357011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.357023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.357029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.357042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.357048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.357060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.357067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.357390] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.357402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.357416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.357423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.357435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.357442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.357454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.357461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.357473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.357482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.357494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.357501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.357513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.090 [2024-11-19 10:55:11.357520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.090 [2024-11-19 10:55:11.357534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:152 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 
m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 
[2024-11-19 10:55:11.357826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 
10:55:11.357935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.357991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.357998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.358279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.358290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.358303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.358310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.358322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.358330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.358342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.358349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.358361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.358369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.358381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.358389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.358402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.358409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.358422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.358429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.358441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.358449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.358461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.358469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.358481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.358488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.358500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.358507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.358521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.358529] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.358541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.091 [2024-11-19 10:55:11.358547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:38.091 [2024-11-19 10:55:11.358559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.092 [2024-11-19 10:55:11.358566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.358579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.092 [2024-11-19 10:55:11.358586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.358598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.092 [2024-11-19 10:55:11.358605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.358616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.092 [2024-11-19 10:55:11.358623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.358635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:32 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.092 [2024-11-19 10:55:11.358642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.358654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.092 [2024-11-19 10:55:11.358661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.358673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.092 [2024-11-19 10:55:11.358680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.358691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.092 [2024-11-19 10:55:11.358698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.358710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.092 [2024-11-19 10:55:11.358717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.358729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.092 [2024-11-19 10:55:11.358736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.358949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.092 [2024-11-19 10:55:11.358961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.358974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.092 [2024-11-19 10:55:11.358981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.358993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.092 [2024-11-19 10:55:11.359000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.092 [2024-11-19 10:55:11.359019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.092 [2024-11-19 10:55:11.359038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:38.092 [2024-11-19 10:55:11.359057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.092 [2024-11-19 10:55:11.359075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 
00:27:38.092 [2024-11-19 10:55:11.359164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.092 [2024-11-19 10:55:11.359239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:38.092 [2024-11-19 10:55:11.359278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:27:38.092 [2024-11-19 10:55:11.359388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:38.092 [2024-11-19 10:55:11.359490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:55:11.359502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:55:11.359509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.359521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.093 [2024-11-19 10:55:11.359527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.359539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.093 [2024-11-19 10:55:11.359546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.359780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.359789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.359802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.359809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:27:38.093 [2024-11-19 10:55:11.359821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.359828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.359839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.359846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.359858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.359865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.359877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.359884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.359897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.359904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.359916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.093 
[2024-11-19 10:55:11.359923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.359935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.093 [2024-11-19 10:55:11.359942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.359954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.093 [2024-11-19 10:55:11.359961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.359973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.359980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.359991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.359998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 
10:55:11.360028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 
10:55:11.360131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 
10:55:11.360242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 
10:55:11.360568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:131056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360675] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:38.093 [2024-11-19 10:55:11.360694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.093 [2024-11-19 10:55:11.360700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.360712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.360719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.360731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.360737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.360749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.360756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.360768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.360775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.360787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.360795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.360807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.360814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.360828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.360835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.360847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.360854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.360866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.360872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.360884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.360891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.360902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.360909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.360921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.360928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.360940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.360947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.360959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.360966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361188] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:115 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:38.094 [2024-11-19 10:55:11.361564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.094 [2024-11-19 10:55:11.361571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.361583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.361590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.361602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.361608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.361620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.361627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:38.095 
[2024-11-19 10:55:11.361639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.361646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.361872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.361883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.361895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.361903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.361914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.361921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.361933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.361940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.361952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.361959] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.361973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.361981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.361994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362180] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:54 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.095 [2024-11-19 10:55:11.362772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.095 [2024-11-19 10:55:11.362796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.095 [2024-11-19 10:55:11.362816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.095 [2024-11-19 10:55:11.362836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130672 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:38.095 [2024-11-19 10:55:11.362855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.095 [2024-11-19 10:55:11.362874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:38.095 [2024-11-19 10:55:11.362887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.095 [2024-11-19 10:55:11.362893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.362906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.362913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.362925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.096 [2024-11-19 10:55:11.362932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.362944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.362951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.362964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.362970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.362983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.362990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.363011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.363069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.363096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.363117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.363141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.363163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.363185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.363213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.363234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 
m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.363255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.363277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.363301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.363322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.096 [2024-11-19 10:55:11.363345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:38.096 [2024-11-19 10:55:11.363376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.096 [2024-11-19 10:55:11.363396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.096 [2024-11-19 10:55:11.363417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.096 [2024-11-19 10:55:11.363437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.096 [2024-11-19 10:55:11.363458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.096 [2024-11-19 10:55:11.363478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
00:27:38.096 [2024-11-19 10:55:11.363492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.363500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.363521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.096 [2024-11-19 10:55:11.363542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.096 [2024-11-19 10:55:11.363563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.096 [2024-11-19 10:55:11.363585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:38.096 [2024-11-19 10:55:11.363605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.096 [2024-11-19 10:55:11.363673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.096 [2024-11-19 10:55:11.363697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.096 [2024-11-19 10:55:11.363718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.096 [2024-11-19 10:55:11.363739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.096 [2024-11-19 10:55:11.363760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.096 
[2024-11-19 10:55:11.363775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.096 [2024-11-19 10:55:11.363782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.096 [2024-11-19 10:55:11.363803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:38.096 [2024-11-19 10:55:11.363817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.363824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.363838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.363845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.363859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.363866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.363880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 
10:55:11.363887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.363901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.363908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.363923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.363931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.363946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.363953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.363968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.363975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.363989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.363996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 
10:55:11.364046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:131056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364165] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364466] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:71 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.097 [2024-11-19 10:55:11.364792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:38.097 [2024-11-19 10:55:11.364808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.364815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.364832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.364838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.364878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.364887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.364904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:38.098 [2024-11-19 10:55:11.364913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.364930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.364936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.364953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.364960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.364977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.364984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:38.098 
[2024-11-19 10:55:11.365048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365197] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365564] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:11.365733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:108 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:11.365740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:38.098 11338.46 IOPS, 44.29 MiB/s [2024-11-19T09:55:27.890Z] 10528.57 IOPS, 41.13 MiB/s [2024-11-19T09:55:27.890Z] 9826.67 IOPS, 38.39 MiB/s [2024-11-19T09:55:27.890Z] 9288.38 IOPS, 36.28 MiB/s [2024-11-19T09:55:27.890Z] 9412.35 IOPS, 36.77 MiB/s [2024-11-19T09:55:27.890Z] 9522.33 IOPS, 37.20 MiB/s [2024-11-19T09:55:27.890Z] 9695.58 IOPS, 37.87 MiB/s [2024-11-19T09:55:27.890Z] 9899.50 IOPS, 38.67 MiB/s [2024-11-19T09:55:27.890Z] 10084.76 IOPS, 39.39 MiB/s [2024-11-19T09:55:27.890Z] 10154.14 IOPS, 39.66 MiB/s [2024-11-19T09:55:27.890Z] 10202.91 IOPS, 39.86 MiB/s [2024-11-19T09:55:27.890Z] 10250.50 IOPS, 40.04 MiB/s [2024-11-19T09:55:27.890Z] 10385.08 IOPS, 40.57 MiB/s [2024-11-19T09:55:27.890Z] 10507.35 IOPS, 41.04 MiB/s [2024-11-19T09:55:27.890Z] [2024-11-19 10:55:25.118847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:25.118886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:25.118919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:25.118927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:25.118941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:25.118948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:25.118960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.098 [2024-11-19 10:55:25.118967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:38.098 [2024-11-19 10:55:25.118979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.118987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:124 nsid:1 lba:23872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24048 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.099 [2024-11-19 10:55:25.119305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.099 [2024-11-19 10:55:25.119324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.099 [2024-11-19 10:55:25.119343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.099 [2024-11-19 10:55:25.119362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.099 [2024-11-19 10:55:25.119455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.099 [2024-11-19 10:55:25.119474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23712 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:38.099 [2024-11-19 10:55:25.119493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.119505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.119514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.120430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.120449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.120466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.120474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.120486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.120493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.120505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.120512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:27:38.099 [2024-11-19 10:55:25.120524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.120531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.120543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.120550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.120562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.120569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.120581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.120588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.120600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.099 [2024-11-19 10:55:25.120607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:38.099 [2024-11-19 10:55:25.120619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.100 
[2024-11-19 10:55:25.120625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.120638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.100 [2024-11-19 10:55:25.120645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.120656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.100 [2024-11-19 10:55:25.120666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.120679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.100 [2024-11-19 10:55:25.120685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.120697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.100 [2024-11-19 10:55:25.120705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.120718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.100 [2024-11-19 10:55:25.120725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 
10:55:25.120736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.100 [2024-11-19 10:55:25.120743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.120755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.100 [2024-11-19 10:55:25.120762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.120775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.100 [2024-11-19 10:55:25.120782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.120794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.100 [2024-11-19 10:55:25.120800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.120813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.100 [2024-11-19 10:55:25.120820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.120832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.100 [2024-11-19 10:55:25.120839] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.121104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.100 [2024-11-19 10:55:25.121116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.121130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.100 [2024-11-19 10:55:25.121136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.121149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.100 [2024-11-19 10:55:25.121156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.121171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.100 [2024-11-19 10:55:25.121178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.121190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.100 [2024-11-19 10:55:25.121196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.121216] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.100 [2024-11-19 10:55:25.121224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.121236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.100 [2024-11-19 10:55:25.121243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.121255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.100 [2024-11-19 10:55:25.121261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.121274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.100 [2024-11-19 10:55:25.121281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.121293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.100 [2024-11-19 10:55:25.121299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.121311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.100 [2024-11-19 10:55:25.121318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.121331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.100 [2024-11-19 10:55:25.121337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.121350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.100 [2024-11-19 10:55:25.121357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.121369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.100 [2024-11-19 10:55:25.121375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.121387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.100 [2024-11-19 10:55:25.121394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.121408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.100 [2024-11-19 10:55:25.121415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:38.100 [2024-11-19 10:55:25.121427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.100 [2024-11-19 10:55:25.121434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:38.101 [2024-11-19 10:55:25.121446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.101 [2024-11-19 10:55:25.121453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:38.101 [2024-11-19 10:55:25.121465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.101 [2024-11-19 10:55:25.121472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:38.101 [2024-11-19 10:55:25.121483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.101 [2024-11-19 10:55:25.121490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:38.101 [2024-11-19 10:55:25.121502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.101 [2024-11-19 10:55:25.121509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:38.101 [2024-11-19 10:55:25.121521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.101 [2024-11-19 10:55:25.121527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:38.101 [2024-11-19 10:55:25.121539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.101 [2024-11-19 10:55:25.121546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:38.101 10587.26 IOPS, 41.36 MiB/s [2024-11-19T09:55:27.893Z] 10621.57 IOPS, 41.49 MiB/s [2024-11-19T09:55:27.893Z] Received shutdown signal, test time was about 28.984774 seconds 00:27:38.101 00:27:38.101 Latency(us) 00:27:38.101 [2024-11-19T09:55:27.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.101 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:38.101 Verification LBA range: start 0x0 length 0x4000 00:27:38.101 Nvme0n1 : 28.98 10644.92 41.58 0.00 0.00 12004.72 493.47 3083812.08 00:27:38.101 [2024-11-19T09:55:27.893Z] =================================================================================================================== 00:27:38.101 [2024-11-19T09:55:27.893Z] Total : 10644.92 41.58 0.00 0.00 12004.72 493.47 3083812.08 00:27:38.101 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:38.360 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:38.360 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:38.360 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:38.360 10:55:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:38.360 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:27:38.360 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:38.360 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:27:38.360 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:38.360 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:38.360 rmmod nvme_tcp 00:27:38.360 rmmod nvme_fabrics 00:27:38.360 rmmod nvme_keyring 00:27:38.360 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:38.360 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:27:38.360 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:27:38.360 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 4040875 ']' 00:27:38.360 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 4040875 00:27:38.360 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 4040875 ']' 00:27:38.361 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 4040875 00:27:38.361 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:38.361 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:38.361 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4040875 00:27:38.361 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:38.361 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:38.361 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4040875' 00:27:38.361 killing process with pid 4040875 00:27:38.361 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 4040875 00:27:38.361 10:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 4040875 00:27:38.361 10:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:38.361 10:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:38.620 10:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:38.620 10:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:27:38.620 10:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:38.620 10:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:27:38.620 10:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:27:38.620 10:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:38.620 10:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:38.620 10:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.620 10:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.620 10:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.527 10:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:40.527 00:27:40.527 real 0m41.332s 00:27:40.527 user 1m51.734s 00:27:40.527 sys 0m11.728s 00:27:40.527 10:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:40.527 10:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:40.527 ************************************ 00:27:40.527 END TEST nvmf_host_multipath_status 00:27:40.527 ************************************ 00:27:40.527 10:55:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:40.527 10:55:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:40.527 10:55:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:40.527 10:55:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.527 ************************************ 00:27:40.527 START TEST nvmf_discovery_remove_ifc 00:27:40.527 ************************************ 00:27:40.527 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:40.787 * Looking for test storage... 
00:27:40.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:27:40.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.787 --rc genhtml_branch_coverage=1 00:27:40.787 --rc genhtml_function_coverage=1 00:27:40.787 --rc genhtml_legend=1 00:27:40.787 --rc geninfo_all_blocks=1 00:27:40.787 --rc geninfo_unexecuted_blocks=1 00:27:40.787 00:27:40.787 ' 00:27:40.787 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:40.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.788 --rc genhtml_branch_coverage=1 00:27:40.788 --rc genhtml_function_coverage=1 00:27:40.788 --rc genhtml_legend=1 00:27:40.788 --rc geninfo_all_blocks=1 00:27:40.788 --rc geninfo_unexecuted_blocks=1 00:27:40.788 00:27:40.788 ' 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:40.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.788 --rc genhtml_branch_coverage=1 00:27:40.788 --rc genhtml_function_coverage=1 00:27:40.788 --rc genhtml_legend=1 00:27:40.788 --rc geninfo_all_blocks=1 00:27:40.788 --rc geninfo_unexecuted_blocks=1 00:27:40.788 00:27:40.788 ' 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:40.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.788 --rc genhtml_branch_coverage=1 00:27:40.788 --rc genhtml_function_coverage=1 00:27:40.788 --rc genhtml_legend=1 00:27:40.788 --rc geninfo_all_blocks=1 00:27:40.788 --rc geninfo_unexecuted_blocks=1 00:27:40.788 00:27:40.788 ' 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:40.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:40.788 
10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:40.788 10:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:47.361 10:55:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.361 10:55:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:47.361 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:47.361 10:55:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:47.361 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.361 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:47.362 Found net devices under 0000:86:00.0: cvl_0_0 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:47.362 Found net devices under 0000:86:00.1: cvl_0_1 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:47.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:27:47.362 00:27:47.362 --- 10.0.0.2 ping statistics --- 00:27:47.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.362 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:47.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:27:47.362 00:27:47.362 --- 10.0.0.1 ping statistics --- 00:27:47.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.362 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=4049897 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 4049897 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 4049897 ']' 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.362 [2024-11-19 10:55:36.519777] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:27:47.362 [2024-11-19 10:55:36.519819] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.362 [2024-11-19 10:55:36.599128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.362 [2024-11-19 10:55:36.639437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.362 [2024-11-19 10:55:36.639472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:47.362 [2024-11-19 10:55:36.639479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.362 [2024-11-19 10:55:36.639485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.362 [2024-11-19 10:55:36.639490] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.362 [2024-11-19 10:55:36.640009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.362 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.363 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:47.363 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.363 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.363 [2024-11-19 10:55:36.781321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.363 [2024-11-19 10:55:36.789485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:47.363 null0 00:27:47.363 [2024-11-19 10:55:36.821473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:27:47.363 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.363 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4049934 00:27:47.363 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:47.363 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4049934 /tmp/host.sock 00:27:47.363 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 4049934 ']' 00:27:47.363 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:47.363 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:47.363 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:47.363 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:47.363 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:47.363 10:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.363 [2024-11-19 10:55:36.891278] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:27:47.363 [2024-11-19 10:55:36.891320] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4049934 ] 00:27:47.363 [2024-11-19 10:55:36.963866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.363 [2024-11-19 10:55:37.006311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.363 10:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:47.363 10:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:47.363 10:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:47.363 10:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:47.363 10:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.363 10:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.363 10:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.363 10:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:47.363 10:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.363 10:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.363 10:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.363 10:55:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:47.363 10:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.363 10:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:48.741 [2024-11-19 10:55:38.178358] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:48.741 [2024-11-19 10:55:38.178376] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:48.741 [2024-11-19 10:55:38.178394] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:48.741 [2024-11-19 10:55:38.264705] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:48.741 [2024-11-19 10:55:38.439591] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:48.741 [2024-11-19 10:55:38.440341] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6b19f0:1 started. 
00:27:48.741 [2024-11-19 10:55:38.441652] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:48.741 [2024-11-19 10:55:38.441689] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:48.741 [2024-11-19 10:55:38.441707] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:48.741 [2024-11-19 10:55:38.441722] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:48.741 [2024-11-19 10:55:38.441739] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:48.741 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.741 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:48.741 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:48.741 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:48.741 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:48.741 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.741 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:48.741 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:48.741 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:48.741 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.741 [2024-11-19 10:55:38.487749] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6b19f0 was disconnected and freed. delete nvme_qpair. 00:27:48.741 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:48.741 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:48.741 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:49.000 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:49.000 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:49.000 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:49.000 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:49.000 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.000 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:49.000 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:49.000 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:49.000 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.000 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:49.000 10:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:49.943 10:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:49.943 10:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:49.943 10:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:49.943 10:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.943 10:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:49.943 10:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:49.943 10:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:49.943 10:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.943 10:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:49.943 10:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:51.320 10:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:51.320 10:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:51.320 10:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:51.320 10:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.320 10:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:51.320 10:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:51.320 10:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:27:51.320 10:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.320 10:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:51.320 10:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:52.256 10:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:52.256 10:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:52.256 10:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:52.256 10:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.256 10:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:52.256 10:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:52.256 10:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:52.256 10:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.256 10:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:52.256 10:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:53.191 10:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:53.191 10:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:53.191 10:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:53.191 10:55:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.191 10:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:53.191 10:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:53.191 10:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:53.191 10:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.191 10:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:53.191 10:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:54.127 10:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:54.127 10:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:54.127 10:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:54.127 10:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.127 10:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:54.127 10:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:54.127 10:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:54.127 [2024-11-19 10:55:43.883264] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:54.127 [2024-11-19 10:55:43.883305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.127 [2024-11-19 10:55:43.883331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.127 [2024-11-19 10:55:43.883341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.127 [2024-11-19 10:55:43.883349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.127 [2024-11-19 10:55:43.883356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.127 [2024-11-19 10:55:43.883363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.127 [2024-11-19 10:55:43.883370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.127 [2024-11-19 10:55:43.883377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.127 [2024-11-19 10:55:43.883384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.127 [2024-11-19 10:55:43.883390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.127 [2024-11-19 10:55:43.883397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68e220 is same with the state(6) to be set 00:27:54.127 10:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.127 [2024-11-19 10:55:43.893287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x68e220 (9): Bad file descriptor 00:27:54.127 [2024-11-19 10:55:43.903322] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:54.127 [2024-11-19 10:55:43.903332] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:54.127 [2024-11-19 10:55:43.903337] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:54.127 [2024-11-19 10:55:43.903341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:54.127 [2024-11-19 10:55:43.903361] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:54.127 10:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:54.127 10:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:55.504 10:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:55.504 10:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:55.504 10:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:55.504 10:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.504 10:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:55.504 10:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:55.504 10:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:55.504 [2024-11-19 10:55:44.967296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 
110 00:27:55.504 [2024-11-19 10:55:44.967375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68e220 with addr=10.0.0.2, port=4420 00:27:55.504 [2024-11-19 10:55:44.967408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68e220 is same with the state(6) to be set 00:27:55.504 [2024-11-19 10:55:44.967459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68e220 (9): Bad file descriptor 00:27:55.504 [2024-11-19 10:55:44.968401] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:55.504 [2024-11-19 10:55:44.968463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:55.504 [2024-11-19 10:55:44.968486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:55.504 [2024-11-19 10:55:44.968509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:55.504 [2024-11-19 10:55:44.968530] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:55.504 [2024-11-19 10:55:44.968545] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:55.504 [2024-11-19 10:55:44.968559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:55.504 [2024-11-19 10:55:44.968580] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:27:55.504 [2024-11-19 10:55:44.968594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:55.504 10:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.504 10:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:55.504 10:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:56.442 [2024-11-19 10:55:45.971108] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:56.442 [2024-11-19 10:55:45.971127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:56.442 [2024-11-19 10:55:45.971137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:56.442 [2024-11-19 10:55:45.971144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:56.442 [2024-11-19 10:55:45.971150] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:56.442 [2024-11-19 10:55:45.971156] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:56.442 [2024-11-19 10:55:45.971160] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:56.442 [2024-11-19 10:55:45.971164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:56.442 [2024-11-19 10:55:45.971183] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:56.442 [2024-11-19 10:55:45.971205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.442 [2024-11-19 10:55:45.971214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.442 [2024-11-19 10:55:45.971223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.442 [2024-11-19 10:55:45.971229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.442 [2024-11-19 10:55:45.971241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.442 [2024-11-19 10:55:45.971247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.442 [2024-11-19 10:55:45.971254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.442 [2024-11-19 10:55:45.971261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.442 [2024-11-19 10:55:45.971268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.442 [2024-11-19 10:55:45.971274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.442 [2024-11-19 10:55:45.971280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:27:56.442 [2024-11-19 10:55:45.971803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x67d900 (9): Bad file descriptor 00:27:56.442 [2024-11-19 10:55:45.972814] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:56.442 [2024-11-19 10:55:45.972824] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:56.442 10:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:56.442 10:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:56.442 10:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:56.442 10:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.442 10:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:56.442 10:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:56.442 10:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:56.442 10:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.442 10:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:56.442 10:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:56.442 10:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:56.442 10:55:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:56.442 10:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:56.442 10:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:56.442 10:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:56.442 10:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.443 10:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:56.443 10:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:56.443 10:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:56.443 10:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.443 10:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:56.443 10:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:57.818 10:55:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:57.818 10:55:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:57.818 10:55:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:57.818 10:55:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.818 10:55:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:57.818 10:55:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:57.818 10:55:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:57.818 10:55:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.818 10:55:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:57.818 10:55:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:58.385 [2024-11-19 10:55:48.023684] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:58.385 [2024-11-19 10:55:48.023702] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:58.385 [2024-11-19 10:55:48.023713] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:58.385 [2024-11-19 10:55:48.150103] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:58.644 10:55:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:58.644 10:55:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:58.644 10:55:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:58.644 10:55:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.644 10:55:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:58.644 10:55:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:58.644 10:55:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:27:58.644 10:55:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.644 10:55:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:58.644 10:55:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:58.644 [2024-11-19 10:55:48.284958] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:58.644 [2024-11-19 10:55:48.285501] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x688fd0:1 started. 00:27:58.644 [2024-11-19 10:55:48.286532] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:58.644 [2024-11-19 10:55:48.286562] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:58.644 [2024-11-19 10:55:48.286579] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:58.644 [2024-11-19 10:55:48.286591] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:58.644 [2024-11-19 10:55:48.286597] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:58.644 [2024-11-19 10:55:48.292009] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x688fd0 was disconnected and freed. delete nvme_qpair. 
00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 4049934 00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 4049934 ']' 00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 4049934 00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4049934 
00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:59.580 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4049934' 00:27:59.838 killing process with pid 4049934 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 4049934 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 4049934 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:59.838 rmmod nvme_tcp 00:27:59.838 rmmod nvme_fabrics 00:27:59.838 rmmod nvme_keyring 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 4049897 ']' 00:27:59.838 
10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 4049897 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 4049897 ']' 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 4049897 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:59.838 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4049897 00:28:00.098 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:00.098 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:00.098 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4049897' 00:28:00.098 killing process with pid 4049897 00:28:00.098 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 4049897 00:28:00.098 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 4049897 00:28:00.098 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:00.098 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:00.098 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:00.098 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:28:00.098 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:28:00.098 10:55:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:00.098 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:28:00.098 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:00.098 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:00.098 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.098 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.098 10:55:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.636 10:55:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:02.636 00:28:02.636 real 0m21.565s 00:28:02.636 user 0m26.838s 00:28:02.636 sys 0m5.876s 00:28:02.636 10:55:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:02.636 10:55:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:02.636 ************************************ 00:28:02.636 END TEST nvmf_discovery_remove_ifc 00:28:02.636 ************************************ 00:28:02.636 10:55:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:02.636 10:55:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:02.636 10:55:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:02.636 10:55:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.636 ************************************ 
00:28:02.636 START TEST nvmf_identify_kernel_target 00:28:02.636 ************************************ 00:28:02.636 10:55:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:02.636 * Looking for test storage... 00:28:02.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:02.636 10:55:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:02.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.636 --rc genhtml_branch_coverage=1 00:28:02.636 --rc genhtml_function_coverage=1 00:28:02.636 --rc genhtml_legend=1 00:28:02.636 --rc geninfo_all_blocks=1 00:28:02.636 --rc geninfo_unexecuted_blocks=1 00:28:02.636 00:28:02.636 ' 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:02.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.636 --rc genhtml_branch_coverage=1 00:28:02.636 --rc genhtml_function_coverage=1 00:28:02.636 --rc genhtml_legend=1 00:28:02.636 --rc geninfo_all_blocks=1 00:28:02.636 --rc geninfo_unexecuted_blocks=1 00:28:02.636 00:28:02.636 ' 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:02.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.636 --rc genhtml_branch_coverage=1 00:28:02.636 --rc genhtml_function_coverage=1 00:28:02.636 --rc genhtml_legend=1 00:28:02.636 --rc geninfo_all_blocks=1 00:28:02.636 --rc geninfo_unexecuted_blocks=1 00:28:02.636 00:28:02.636 ' 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:02.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.636 --rc genhtml_branch_coverage=1 00:28:02.636 --rc genhtml_function_coverage=1 00:28:02.636 --rc genhtml_legend=1 00:28:02.636 --rc geninfo_all_blocks=1 
00:28:02.636 --rc geninfo_unexecuted_blocks=1 00:28:02.636 00:28:02.636 ' 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.636 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:02.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:28:02.637 10:55:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.207 10:55:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:09.207 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.207 10:55:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.207 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:09.208 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.208 10:55:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:09.208 Found net devices under 0000:86:00.0: cvl_0_0 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:09.208 Found net devices under 0000:86:00.1: cvl_0_1 
00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:09.208 10:55:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:09.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:09.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:28:09.208 00:28:09.208 --- 10.0.0.2 ping statistics --- 00:28:09.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.208 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:09.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:28:09.208 00:28:09.208 --- 10.0.0.1 ping statistics --- 00:28:09.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.208 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:09.208 
10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:09.208 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:09.209 10:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:11.113 Waiting for block devices as requested 00:28:11.113 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:11.372 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:11.372 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:11.631 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:11.631 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:11.631 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:11.631 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:11.890 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:11.890 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:11.890 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:12.150 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:12.150 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:12.150 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:12.150 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:12.409 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:28:12.409 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:12.409 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:12.668 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:12.668 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:12.668 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:12.668 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:12.668 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:12.668 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:12.669 No valid GPT data, bailing 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:12.669 00:28:12.669 Discovery Log Number of Records 2, Generation counter 2 00:28:12.669 =====Discovery Log Entry 0====== 00:28:12.669 trtype: tcp 00:28:12.669 adrfam: ipv4 00:28:12.669 subtype: current discovery subsystem 
00:28:12.669 treq: not specified, sq flow control disable supported 00:28:12.669 portid: 1 00:28:12.669 trsvcid: 4420 00:28:12.669 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:12.669 traddr: 10.0.0.1 00:28:12.669 eflags: none 00:28:12.669 sectype: none 00:28:12.669 =====Discovery Log Entry 1====== 00:28:12.669 trtype: tcp 00:28:12.669 adrfam: ipv4 00:28:12.669 subtype: nvme subsystem 00:28:12.669 treq: not specified, sq flow control disable supported 00:28:12.669 portid: 1 00:28:12.669 trsvcid: 4420 00:28:12.669 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:12.669 traddr: 10.0.0.1 00:28:12.669 eflags: none 00:28:12.669 sectype: none 00:28:12.669 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:12.669 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:12.929 ===================================================== 00:28:12.929 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:12.929 ===================================================== 00:28:12.929 Controller Capabilities/Features 00:28:12.929 ================================ 00:28:12.929 Vendor ID: 0000 00:28:12.929 Subsystem Vendor ID: 0000 00:28:12.929 Serial Number: b6fc8e2a494e2619f305 00:28:12.929 Model Number: Linux 00:28:12.929 Firmware Version: 6.8.9-20 00:28:12.929 Recommended Arb Burst: 0 00:28:12.929 IEEE OUI Identifier: 00 00 00 00:28:12.929 Multi-path I/O 00:28:12.929 May have multiple subsystem ports: No 00:28:12.929 May have multiple controllers: No 00:28:12.929 Associated with SR-IOV VF: No 00:28:12.929 Max Data Transfer Size: Unlimited 00:28:12.929 Max Number of Namespaces: 0 00:28:12.929 Max Number of I/O Queues: 1024 00:28:12.929 NVMe Specification Version (VS): 1.3 00:28:12.929 NVMe Specification Version (Identify): 1.3 00:28:12.929 Maximum Queue Entries: 1024 
00:28:12.929 Contiguous Queues Required: No 00:28:12.929 Arbitration Mechanisms Supported 00:28:12.929 Weighted Round Robin: Not Supported 00:28:12.929 Vendor Specific: Not Supported 00:28:12.929 Reset Timeout: 7500 ms 00:28:12.929 Doorbell Stride: 4 bytes 00:28:12.929 NVM Subsystem Reset: Not Supported 00:28:12.929 Command Sets Supported 00:28:12.929 NVM Command Set: Supported 00:28:12.929 Boot Partition: Not Supported 00:28:12.929 Memory Page Size Minimum: 4096 bytes 00:28:12.929 Memory Page Size Maximum: 4096 bytes 00:28:12.929 Persistent Memory Region: Not Supported 00:28:12.929 Optional Asynchronous Events Supported 00:28:12.929 Namespace Attribute Notices: Not Supported 00:28:12.929 Firmware Activation Notices: Not Supported 00:28:12.929 ANA Change Notices: Not Supported 00:28:12.929 PLE Aggregate Log Change Notices: Not Supported 00:28:12.929 LBA Status Info Alert Notices: Not Supported 00:28:12.929 EGE Aggregate Log Change Notices: Not Supported 00:28:12.929 Normal NVM Subsystem Shutdown event: Not Supported 00:28:12.929 Zone Descriptor Change Notices: Not Supported 00:28:12.929 Discovery Log Change Notices: Supported 00:28:12.929 Controller Attributes 00:28:12.929 128-bit Host Identifier: Not Supported 00:28:12.929 Non-Operational Permissive Mode: Not Supported 00:28:12.929 NVM Sets: Not Supported 00:28:12.929 Read Recovery Levels: Not Supported 00:28:12.929 Endurance Groups: Not Supported 00:28:12.929 Predictable Latency Mode: Not Supported 00:28:12.929 Traffic Based Keep ALive: Not Supported 00:28:12.929 Namespace Granularity: Not Supported 00:28:12.929 SQ Associations: Not Supported 00:28:12.929 UUID List: Not Supported 00:28:12.929 Multi-Domain Subsystem: Not Supported 00:28:12.929 Fixed Capacity Management: Not Supported 00:28:12.929 Variable Capacity Management: Not Supported 00:28:12.929 Delete Endurance Group: Not Supported 00:28:12.929 Delete NVM Set: Not Supported 00:28:12.929 Extended LBA Formats Supported: Not Supported 00:28:12.929 Flexible 
Data Placement Supported: Not Supported 00:28:12.929 00:28:12.929 Controller Memory Buffer Support 00:28:12.929 ================================ 00:28:12.929 Supported: No 00:28:12.929 00:28:12.929 Persistent Memory Region Support 00:28:12.929 ================================ 00:28:12.929 Supported: No 00:28:12.929 00:28:12.929 Admin Command Set Attributes 00:28:12.929 ============================ 00:28:12.930 Security Send/Receive: Not Supported 00:28:12.930 Format NVM: Not Supported 00:28:12.930 Firmware Activate/Download: Not Supported 00:28:12.930 Namespace Management: Not Supported 00:28:12.930 Device Self-Test: Not Supported 00:28:12.930 Directives: Not Supported 00:28:12.930 NVMe-MI: Not Supported 00:28:12.930 Virtualization Management: Not Supported 00:28:12.930 Doorbell Buffer Config: Not Supported 00:28:12.930 Get LBA Status Capability: Not Supported 00:28:12.930 Command & Feature Lockdown Capability: Not Supported 00:28:12.930 Abort Command Limit: 1 00:28:12.930 Async Event Request Limit: 1 00:28:12.930 Number of Firmware Slots: N/A 00:28:12.930 Firmware Slot 1 Read-Only: N/A 00:28:12.930 Firmware Activation Without Reset: N/A 00:28:12.930 Multiple Update Detection Support: N/A 00:28:12.930 Firmware Update Granularity: No Information Provided 00:28:12.930 Per-Namespace SMART Log: No 00:28:12.930 Asymmetric Namespace Access Log Page: Not Supported 00:28:12.930 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:12.930 Command Effects Log Page: Not Supported 00:28:12.930 Get Log Page Extended Data: Supported 00:28:12.930 Telemetry Log Pages: Not Supported 00:28:12.930 Persistent Event Log Pages: Not Supported 00:28:12.930 Supported Log Pages Log Page: May Support 00:28:12.930 Commands Supported & Effects Log Page: Not Supported 00:28:12.930 Feature Identifiers & Effects Log Page:May Support 00:28:12.930 NVMe-MI Commands & Effects Log Page: May Support 00:28:12.930 Data Area 4 for Telemetry Log: Not Supported 00:28:12.930 Error Log Page Entries 
Supported: 1 00:28:12.930 Keep Alive: Not Supported 00:28:12.930 00:28:12.930 NVM Command Set Attributes 00:28:12.930 ========================== 00:28:12.930 Submission Queue Entry Size 00:28:12.930 Max: 1 00:28:12.930 Min: 1 00:28:12.930 Completion Queue Entry Size 00:28:12.930 Max: 1 00:28:12.930 Min: 1 00:28:12.930 Number of Namespaces: 0 00:28:12.930 Compare Command: Not Supported 00:28:12.930 Write Uncorrectable Command: Not Supported 00:28:12.930 Dataset Management Command: Not Supported 00:28:12.930 Write Zeroes Command: Not Supported 00:28:12.930 Set Features Save Field: Not Supported 00:28:12.930 Reservations: Not Supported 00:28:12.930 Timestamp: Not Supported 00:28:12.930 Copy: Not Supported 00:28:12.930 Volatile Write Cache: Not Present 00:28:12.930 Atomic Write Unit (Normal): 1 00:28:12.930 Atomic Write Unit (PFail): 1 00:28:12.930 Atomic Compare & Write Unit: 1 00:28:12.930 Fused Compare & Write: Not Supported 00:28:12.930 Scatter-Gather List 00:28:12.930 SGL Command Set: Supported 00:28:12.930 SGL Keyed: Not Supported 00:28:12.930 SGL Bit Bucket Descriptor: Not Supported 00:28:12.930 SGL Metadata Pointer: Not Supported 00:28:12.930 Oversized SGL: Not Supported 00:28:12.930 SGL Metadata Address: Not Supported 00:28:12.930 SGL Offset: Supported 00:28:12.930 Transport SGL Data Block: Not Supported 00:28:12.930 Replay Protected Memory Block: Not Supported 00:28:12.930 00:28:12.930 Firmware Slot Information 00:28:12.930 ========================= 00:28:12.930 Active slot: 0 00:28:12.930 00:28:12.930 00:28:12.930 Error Log 00:28:12.930 ========= 00:28:12.930 00:28:12.930 Active Namespaces 00:28:12.930 ================= 00:28:12.930 Discovery Log Page 00:28:12.930 ================== 00:28:12.930 Generation Counter: 2 00:28:12.930 Number of Records: 2 00:28:12.930 Record Format: 0 00:28:12.930 00:28:12.930 Discovery Log Entry 0 00:28:12.930 ---------------------- 00:28:12.930 Transport Type: 3 (TCP) 00:28:12.930 Address Family: 1 (IPv4) 00:28:12.930 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:28:12.930 Entry Flags: 00:28:12.930 Duplicate Returned Information: 0 00:28:12.930 Explicit Persistent Connection Support for Discovery: 0 00:28:12.930 Transport Requirements: 00:28:12.930 Secure Channel: Not Specified 00:28:12.930 Port ID: 1 (0x0001) 00:28:12.930 Controller ID: 65535 (0xffff) 00:28:12.930 Admin Max SQ Size: 32 00:28:12.930 Transport Service Identifier: 4420 00:28:12.930 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:12.930 Transport Address: 10.0.0.1 00:28:12.930 Discovery Log Entry 1 00:28:12.930 ---------------------- 00:28:12.930 Transport Type: 3 (TCP) 00:28:12.930 Address Family: 1 (IPv4) 00:28:12.930 Subsystem Type: 2 (NVM Subsystem) 00:28:12.930 Entry Flags: 00:28:12.930 Duplicate Returned Information: 0 00:28:12.930 Explicit Persistent Connection Support for Discovery: 0 00:28:12.930 Transport Requirements: 00:28:12.930 Secure Channel: Not Specified 00:28:12.930 Port ID: 1 (0x0001) 00:28:12.930 Controller ID: 65535 (0xffff) 00:28:12.930 Admin Max SQ Size: 32 00:28:12.930 Transport Service Identifier: 4420 00:28:12.930 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:12.930 Transport Address: 10.0.0.1 00:28:12.930 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:12.930 get_feature(0x01) failed 00:28:12.930 get_feature(0x02) failed 00:28:12.930 get_feature(0x04) failed 00:28:12.930 ===================================================== 00:28:12.930 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:12.930 ===================================================== 00:28:12.930 Controller Capabilities/Features 00:28:12.930 ================================ 00:28:12.931 Vendor ID: 0000 00:28:12.931 Subsystem Vendor ID: 
0000 00:28:12.931 Serial Number: 4bbda5400860a5a03b16 00:28:12.931 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:12.931 Firmware Version: 6.8.9-20 00:28:12.931 Recommended Arb Burst: 6 00:28:12.931 IEEE OUI Identifier: 00 00 00 00:28:12.931 Multi-path I/O 00:28:12.931 May have multiple subsystem ports: Yes 00:28:12.931 May have multiple controllers: Yes 00:28:12.931 Associated with SR-IOV VF: No 00:28:12.931 Max Data Transfer Size: Unlimited 00:28:12.931 Max Number of Namespaces: 1024 00:28:12.931 Max Number of I/O Queues: 128 00:28:12.931 NVMe Specification Version (VS): 1.3 00:28:12.931 NVMe Specification Version (Identify): 1.3 00:28:12.931 Maximum Queue Entries: 1024 00:28:12.931 Contiguous Queues Required: No 00:28:12.931 Arbitration Mechanisms Supported 00:28:12.931 Weighted Round Robin: Not Supported 00:28:12.931 Vendor Specific: Not Supported 00:28:12.931 Reset Timeout: 7500 ms 00:28:12.931 Doorbell Stride: 4 bytes 00:28:12.931 NVM Subsystem Reset: Not Supported 00:28:12.931 Command Sets Supported 00:28:12.931 NVM Command Set: Supported 00:28:12.931 Boot Partition: Not Supported 00:28:12.931 Memory Page Size Minimum: 4096 bytes 00:28:12.931 Memory Page Size Maximum: 4096 bytes 00:28:12.931 Persistent Memory Region: Not Supported 00:28:12.931 Optional Asynchronous Events Supported 00:28:12.931 Namespace Attribute Notices: Supported 00:28:12.931 Firmware Activation Notices: Not Supported 00:28:12.931 ANA Change Notices: Supported 00:28:12.931 PLE Aggregate Log Change Notices: Not Supported 00:28:12.931 LBA Status Info Alert Notices: Not Supported 00:28:12.931 EGE Aggregate Log Change Notices: Not Supported 00:28:12.931 Normal NVM Subsystem Shutdown event: Not Supported 00:28:12.931 Zone Descriptor Change Notices: Not Supported 00:28:12.931 Discovery Log Change Notices: Not Supported 00:28:12.931 Controller Attributes 00:28:12.931 128-bit Host Identifier: Supported 00:28:12.931 Non-Operational Permissive Mode: Not Supported 00:28:12.931 NVM Sets: Not 
Supported 00:28:12.931 Read Recovery Levels: Not Supported 00:28:12.931 Endurance Groups: Not Supported 00:28:12.931 Predictable Latency Mode: Not Supported 00:28:12.931 Traffic Based Keep ALive: Supported 00:28:12.931 Namespace Granularity: Not Supported 00:28:12.931 SQ Associations: Not Supported 00:28:12.931 UUID List: Not Supported 00:28:12.931 Multi-Domain Subsystem: Not Supported 00:28:12.931 Fixed Capacity Management: Not Supported 00:28:12.931 Variable Capacity Management: Not Supported 00:28:12.931 Delete Endurance Group: Not Supported 00:28:12.931 Delete NVM Set: Not Supported 00:28:12.931 Extended LBA Formats Supported: Not Supported 00:28:12.931 Flexible Data Placement Supported: Not Supported 00:28:12.931 00:28:12.931 Controller Memory Buffer Support 00:28:12.931 ================================ 00:28:12.931 Supported: No 00:28:12.931 00:28:12.931 Persistent Memory Region Support 00:28:12.931 ================================ 00:28:12.931 Supported: No 00:28:12.931 00:28:12.931 Admin Command Set Attributes 00:28:12.931 ============================ 00:28:12.931 Security Send/Receive: Not Supported 00:28:12.931 Format NVM: Not Supported 00:28:12.931 Firmware Activate/Download: Not Supported 00:28:12.931 Namespace Management: Not Supported 00:28:12.931 Device Self-Test: Not Supported 00:28:12.931 Directives: Not Supported 00:28:12.931 NVMe-MI: Not Supported 00:28:12.931 Virtualization Management: Not Supported 00:28:12.931 Doorbell Buffer Config: Not Supported 00:28:12.931 Get LBA Status Capability: Not Supported 00:28:12.931 Command & Feature Lockdown Capability: Not Supported 00:28:12.931 Abort Command Limit: 4 00:28:12.931 Async Event Request Limit: 4 00:28:12.931 Number of Firmware Slots: N/A 00:28:12.931 Firmware Slot 1 Read-Only: N/A 00:28:12.931 Firmware Activation Without Reset: N/A 00:28:12.931 Multiple Update Detection Support: N/A 00:28:12.931 Firmware Update Granularity: No Information Provided 00:28:12.931 Per-Namespace SMART Log: Yes 
00:28:12.931 Asymmetric Namespace Access Log Page: Supported 00:28:12.931 ANA Transition Time : 10 sec 00:28:12.931 00:28:12.931 Asymmetric Namespace Access Capabilities 00:28:12.931 ANA Optimized State : Supported 00:28:12.931 ANA Non-Optimized State : Supported 00:28:12.931 ANA Inaccessible State : Supported 00:28:12.931 ANA Persistent Loss State : Supported 00:28:12.931 ANA Change State : Supported 00:28:12.931 ANAGRPID is not changed : No 00:28:12.931 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:12.931 00:28:12.931 ANA Group Identifier Maximum : 128 00:28:12.931 Number of ANA Group Identifiers : 128 00:28:12.931 Max Number of Allowed Namespaces : 1024 00:28:12.931 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:12.931 Command Effects Log Page: Supported 00:28:12.931 Get Log Page Extended Data: Supported 00:28:12.931 Telemetry Log Pages: Not Supported 00:28:12.931 Persistent Event Log Pages: Not Supported 00:28:12.931 Supported Log Pages Log Page: May Support 00:28:12.931 Commands Supported & Effects Log Page: Not Supported 00:28:12.931 Feature Identifiers & Effects Log Page:May Support 00:28:12.931 NVMe-MI Commands & Effects Log Page: May Support 00:28:12.931 Data Area 4 for Telemetry Log: Not Supported 00:28:12.931 Error Log Page Entries Supported: 128 00:28:12.931 Keep Alive: Supported 00:28:12.931 Keep Alive Granularity: 1000 ms 00:28:12.931 00:28:12.931 NVM Command Set Attributes 00:28:12.931 ========================== 00:28:12.932 Submission Queue Entry Size 00:28:12.932 Max: 64 00:28:12.932 Min: 64 00:28:12.932 Completion Queue Entry Size 00:28:12.932 Max: 16 00:28:12.932 Min: 16 00:28:12.932 Number of Namespaces: 1024 00:28:12.932 Compare Command: Not Supported 00:28:12.932 Write Uncorrectable Command: Not Supported 00:28:12.932 Dataset Management Command: Supported 00:28:12.932 Write Zeroes Command: Supported 00:28:12.932 Set Features Save Field: Not Supported 00:28:12.932 Reservations: Not Supported 00:28:12.932 Timestamp: Not Supported 
00:28:12.932 Copy: Not Supported 00:28:12.932 Volatile Write Cache: Present 00:28:12.932 Atomic Write Unit (Normal): 1 00:28:12.932 Atomic Write Unit (PFail): 1 00:28:12.932 Atomic Compare & Write Unit: 1 00:28:12.932 Fused Compare & Write: Not Supported 00:28:12.932 Scatter-Gather List 00:28:12.932 SGL Command Set: Supported 00:28:12.932 SGL Keyed: Not Supported 00:28:12.932 SGL Bit Bucket Descriptor: Not Supported 00:28:12.932 SGL Metadata Pointer: Not Supported 00:28:12.932 Oversized SGL: Not Supported 00:28:12.932 SGL Metadata Address: Not Supported 00:28:12.932 SGL Offset: Supported 00:28:12.932 Transport SGL Data Block: Not Supported 00:28:12.932 Replay Protected Memory Block: Not Supported 00:28:12.932 00:28:12.932 Firmware Slot Information 00:28:12.932 ========================= 00:28:12.932 Active slot: 0 00:28:12.932 00:28:12.932 Asymmetric Namespace Access 00:28:12.932 =========================== 00:28:12.932 Change Count : 0 00:28:12.932 Number of ANA Group Descriptors : 1 00:28:12.932 ANA Group Descriptor : 0 00:28:12.932 ANA Group ID : 1 00:28:12.932 Number of NSID Values : 1 00:28:12.932 Change Count : 0 00:28:12.932 ANA State : 1 00:28:12.932 Namespace Identifier : 1 00:28:12.932 00:28:12.932 Commands Supported and Effects 00:28:12.932 ============================== 00:28:12.932 Admin Commands 00:28:12.932 -------------- 00:28:12.932 Get Log Page (02h): Supported 00:28:12.932 Identify (06h): Supported 00:28:12.932 Abort (08h): Supported 00:28:12.932 Set Features (09h): Supported 00:28:12.932 Get Features (0Ah): Supported 00:28:12.932 Asynchronous Event Request (0Ch): Supported 00:28:12.932 Keep Alive (18h): Supported 00:28:12.932 I/O Commands 00:28:12.932 ------------ 00:28:12.932 Flush (00h): Supported 00:28:12.932 Write (01h): Supported LBA-Change 00:28:12.932 Read (02h): Supported 00:28:12.932 Write Zeroes (08h): Supported LBA-Change 00:28:12.932 Dataset Management (09h): Supported 00:28:12.932 00:28:12.932 Error Log 00:28:12.932 ========= 
00:28:12.932 Entry: 0 00:28:12.932 Error Count: 0x3 00:28:12.932 Submission Queue Id: 0x0 00:28:12.932 Command Id: 0x5 00:28:12.932 Phase Bit: 0 00:28:12.932 Status Code: 0x2 00:28:12.932 Status Code Type: 0x0 00:28:12.932 Do Not Retry: 1 00:28:12.932 Error Location: 0x28 00:28:12.932 LBA: 0x0 00:28:12.932 Namespace: 0x0 00:28:12.932 Vendor Log Page: 0x0 00:28:12.932 ----------- 00:28:12.932 Entry: 1 00:28:12.932 Error Count: 0x2 00:28:12.932 Submission Queue Id: 0x0 00:28:12.932 Command Id: 0x5 00:28:12.932 Phase Bit: 0 00:28:12.932 Status Code: 0x2 00:28:12.932 Status Code Type: 0x0 00:28:12.932 Do Not Retry: 1 00:28:12.932 Error Location: 0x28 00:28:12.932 LBA: 0x0 00:28:12.932 Namespace: 0x0 00:28:12.932 Vendor Log Page: 0x0 00:28:12.932 ----------- 00:28:12.932 Entry: 2 00:28:12.932 Error Count: 0x1 00:28:12.932 Submission Queue Id: 0x0 00:28:12.932 Command Id: 0x4 00:28:12.932 Phase Bit: 0 00:28:12.932 Status Code: 0x2 00:28:12.932 Status Code Type: 0x0 00:28:12.932 Do Not Retry: 1 00:28:12.932 Error Location: 0x28 00:28:12.932 LBA: 0x0 00:28:12.932 Namespace: 0x0 00:28:12.932 Vendor Log Page: 0x0 00:28:12.932 00:28:12.932 Number of Queues 00:28:12.932 ================ 00:28:12.932 Number of I/O Submission Queues: 128 00:28:12.932 Number of I/O Completion Queues: 128 00:28:12.932 00:28:12.932 ZNS Specific Controller Data 00:28:12.932 ============================ 00:28:12.932 Zone Append Size Limit: 0 00:28:12.932 00:28:12.932 00:28:12.932 Active Namespaces 00:28:12.932 ================= 00:28:12.932 get_feature(0x05) failed 00:28:12.932 Namespace ID:1 00:28:12.932 Command Set Identifier: NVM (00h) 00:28:12.932 Deallocate: Supported 00:28:12.932 Deallocated/Unwritten Error: Not Supported 00:28:12.932 Deallocated Read Value: Unknown 00:28:12.932 Deallocate in Write Zeroes: Not Supported 00:28:12.932 Deallocated Guard Field: 0xFFFF 00:28:12.932 Flush: Supported 00:28:12.932 Reservation: Not Supported 00:28:12.932 Namespace Sharing Capabilities: Multiple 
Controllers 00:28:12.932 Size (in LBAs): 3125627568 (1490GiB) 00:28:12.932 Capacity (in LBAs): 3125627568 (1490GiB) 00:28:12.932 Utilization (in LBAs): 3125627568 (1490GiB) 00:28:12.932 UUID: 14d4ab56-7834-4949-8f63-f60a5c8d2cfc 00:28:12.932 Thin Provisioning: Not Supported 00:28:12.932 Per-NS Atomic Units: Yes 00:28:12.933 Atomic Boundary Size (Normal): 0 00:28:12.933 Atomic Boundary Size (PFail): 0 00:28:12.933 Atomic Boundary Offset: 0 00:28:12.933 NGUID/EUI64 Never Reused: No 00:28:12.933 ANA group ID: 1 00:28:12.933 Namespace Write Protected: No 00:28:12.933 Number of LBA Formats: 1 00:28:12.933 Current LBA Format: LBA Format #00 00:28:12.933 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:12.933 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:12.933 rmmod nvme_tcp 00:28:12.933 rmmod nvme_fabrics 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.933 10:56:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.469 10:56:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:15.469 10:56:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:15.469 10:56:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:15.469 10:56:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:28:15.469 10:56:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:15.469 10:56:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:15.469 10:56:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:15.469 10:56:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:15.469 10:56:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:15.469 10:56:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:15.469 10:56:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:18.086 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:18.086 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:18.086 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:18.086 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:18.086 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:18.086 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:18.086 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:18.086 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:18.086 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:18.086 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:18.086 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:18.086 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:18.086 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:18.086 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:18.086 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:18.086 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:28:19.469 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:28:19.733 00:28:19.733 real 0m17.358s 00:28:19.733 user 0m4.323s 00:28:19.733 sys 0m8.811s 00:28:19.733 10:56:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:19.733 10:56:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.733 ************************************ 00:28:19.733 END TEST nvmf_identify_kernel_target 00:28:19.733 ************************************ 00:28:19.733 10:56:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:19.733 10:56:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:19.733 10:56:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:19.733 10:56:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.733 ************************************ 00:28:19.733 START TEST nvmf_auth_host 00:28:19.733 ************************************ 00:28:19.733 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:19.733 * Looking for test storage... 
00:28:19.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:19.734 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:19.734 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:19.734 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:19.734 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:19.734 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:19.734 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:19.734 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:19.734 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:19.734 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:19.734 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:19.734 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:19.734 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:19.734 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:19.734 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:19.734 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:19.992 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:19.992 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:19.992 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:19.992 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:19.992 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:19.992 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:19.992 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:19.992 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:19.992 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:19.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.993 --rc genhtml_branch_coverage=1 00:28:19.993 --rc genhtml_function_coverage=1 00:28:19.993 --rc genhtml_legend=1 00:28:19.993 --rc geninfo_all_blocks=1 00:28:19.993 --rc geninfo_unexecuted_blocks=1 00:28:19.993 00:28:19.993 ' 00:28:19.993 10:56:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:19.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.993 --rc genhtml_branch_coverage=1 00:28:19.993 --rc genhtml_function_coverage=1 00:28:19.993 --rc genhtml_legend=1 00:28:19.993 --rc geninfo_all_blocks=1 00:28:19.993 --rc geninfo_unexecuted_blocks=1 00:28:19.993 00:28:19.993 ' 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:19.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.993 --rc genhtml_branch_coverage=1 00:28:19.993 --rc genhtml_function_coverage=1 00:28:19.993 --rc genhtml_legend=1 00:28:19.993 --rc geninfo_all_blocks=1 00:28:19.993 --rc geninfo_unexecuted_blocks=1 00:28:19.993 00:28:19.993 ' 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:19.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.993 --rc genhtml_branch_coverage=1 00:28:19.993 --rc genhtml_function_coverage=1 00:28:19.993 --rc genhtml_legend=1 00:28:19.993 --rc geninfo_all_blocks=1 00:28:19.993 --rc geninfo_unexecuted_blocks=1 00:28:19.993 00:28:19.993 ' 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
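The `lt 1.15 2` guard above comes from scripts/common.sh's element-wise version compare: both strings are split on `.`, `-`, and `:` (note the `IFS=.-:` lines), and the components are compared numerically left to right, with missing components treated as zero. A simplified re-sketch of that comparison:

```shell
# Return 0 (true) when dotted version $1 is strictly less than $2.
version_lt() {
    local IFS='.-:'                     # split on the same separators as scripts/common.sh
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing components count as 0, so "1.15" vs "2" compares 1 < 2 first.
        local x=${a[i]:-0} y=${b[i]:-0}
        if (( x > y )); then return 1; fi
        if (( x < y )); then return 0; fi
    done
    return 1                            # equal is not "less than"
}
```

This is why the log takes the `lt 1.15 2` branch and enables the extra lcov branch/function-coverage options for the installed lcov version.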
00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.993 10:56:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:19.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:19.993 10:56:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:19.993 10:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:26.563 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:26.563 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:26.563 Found net devices under 0000:86:00.0: cvl_0_0 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:26.563 Found net devices under 0000:86:00.1: cvl_0_1 00:28:26.563 10:56:15 
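The device discovery above globs each detected PCI NIC's `net/` directory under sysfs and then strips the full paths down to interface names with the `${pci_net_devs[@]##*/}` expansion, producing the "Found net devices under ..." lines. A sketch of that glob-and-strip idiom against a mock sysfs tree (the mock path and interface name are illustrative):

```shell
# Mock /sys/bus/pci/devices/<bdf>/net/<ifname> for one NIC.
sys=$(mktemp -d)
pci=0000:86:00.0
mkdir -p "$sys/$pci/net/cvl_0_0"

# Same pattern as nvmf/common.sh: glob the net dir, then keep only the
# basename of each hit via the ##*/ prefix-strip expansion.
pci_net_devs=("$sys/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")
msg="Found net devices under $pci: ${pci_net_devs[*]}"
echo "$msg"
rm -rf "$sys"
```

If the glob matches nothing the array would hold the literal pattern, which is why the real script also checks the element count before trusting the result.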
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:26.563 10:56:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:26.563 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:26.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:26.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:28:26.564 00:28:26.564 --- 10.0.0.2 ping statistics --- 00:28:26.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.564 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:26.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:26.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:28:26.564 00:28:26.564 --- 10.0.0.1 ping statistics --- 00:28:26.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.564 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=4062144 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:26.564 10:56:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 4062144 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 4062144 ']' 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=26fb6b75f97283bea3f4535f56334c22 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.6ud 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 26fb6b75f97283bea3f4535f56334c22 0 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 26fb6b75f97283bea3f4535f56334c22 0 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=26fb6b75f97283bea3f4535f56334c22 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.6ud 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.6ud 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.6ud 00:28:26.564 10:56:15 
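The `gen_dhchap_key null 32` run above draws 16 random bytes as 32 hex characters (`xxd -p -c0 -l 16 /dev/urandom`), then shells out to a small Python snippet to wrap them as a DH-HMAC-CHAP secret. A hedged sketch of that wrapping, assuming the NVMe secret representation from TP 8006 (`DHHC-1:<hash-id>:` plus base64 of the raw key followed by its little-endian CRC-32, plus a trailing `:`); the exact Python body in nvmf/common.sh is not shown in this log, so this is a reconstruction, not SPDK's code:

```shell
# 16 random bytes as 32 hex chars; od is used here in place of the
# log's `xxd -p -c0 -l 16 /dev/urandom` for portability.
key=$(od -An -tx1 -N16 /dev/urandom | tr -d ' \n')

# Wrap as an NVMe DH-HMAC-CHAP secret. Hash id 00 means "no transform",
# matching the digest=0 ("null") case in the log above.
secret=$(python3 - "$key" <<'EOF'
import sys, base64, struct, zlib
raw = bytes.fromhex(sys.argv[1])
crc = struct.pack('<I', zlib.crc32(raw))          # little-endian CRC-32 trailer
print('DHHC-1:00:%s:' % base64.b64encode(raw + crc).decode())
EOF
)
printf '%s\n' "$secret"
```

The result has the same shape as the keys the test writes to `/tmp/spdk.key-null.*`: 20 payload bytes (key plus CRC) base64-encode to exactly 28 characters between the second and third colon.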
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=33cdf4feca4f3f986b1b106eb514eb5374f37729f41c5a28a03ca59e9ce70178 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.5MQ 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 33cdf4feca4f3f986b1b106eb514eb5374f37729f41c5a28a03ca59e9ce70178 3 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 33cdf4feca4f3f986b1b106eb514eb5374f37729f41c5a28a03ca59e9ce70178 3 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=33cdf4feca4f3f986b1b106eb514eb5374f37729f41c5a28a03ca59e9ce70178 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.5MQ 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.5MQ 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.5MQ 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=83b7789b4a0e86b61ee24bb8e0c73d331c9a3b033e121f78 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Og5 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 83b7789b4a0e86b61ee24bb8e0c73d331c9a3b033e121f78 0 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 83b7789b4a0e86b61ee24bb8e0c73d331c9a3b033e121f78 0 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:26.564 10:56:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=83b7789b4a0e86b61ee24bb8e0c73d331c9a3b033e121f78 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Og5 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Og5 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Og5 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:26.564 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:26.565 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:26.565 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:26.565 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:26.565 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:26.565 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9c7ba142adb9c491d3480275068df0e2662a02e88fdca019 00:28:26.565 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:26.565 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.7j6 00:28:26.565 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9c7ba142adb9c491d3480275068df0e2662a02e88fdca019 2 00:28:26.565 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 9c7ba142adb9c491d3480275068df0e2662a02e88fdca019 2 00:28:26.565 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:26.565 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:26.565 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9c7ba142adb9c491d3480275068df0e2662a02e88fdca019 00:28:26.565 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:26.565 10:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.7j6 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.7j6 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.7j6 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=90c47b474563e3524ca11338aded16c4 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.qCg 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 90c47b474563e3524ca11338aded16c4 1 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 90c47b474563e3524ca11338aded16c4 1 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=90c47b474563e3524ca11338aded16c4 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.qCg 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.qCg 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.qCg 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=ca2517b608a9ac54f3687ae5a22fe6e3 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.mKG 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ca2517b608a9ac54f3687ae5a22fe6e3 1 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ca2517b608a9ac54f3687ae5a22fe6e3 1 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ca2517b608a9ac54f3687ae5a22fe6e3 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.mKG 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.mKG 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.mKG 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:26.565 10:56:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a0e9294f06146e689d855cba602e8c18be7b4f1bb1fa0073 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.GUK 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a0e9294f06146e689d855cba602e8c18be7b4f1bb1fa0073 2 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a0e9294f06146e689d855cba602e8c18be7b4f1bb1fa0073 2 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a0e9294f06146e689d855cba602e8c18be7b4f1bb1fa0073 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.GUK 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.GUK 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.GUK 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=77536fb19b6db0d24de1dfb17b8386d5 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Otr 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 77536fb19b6db0d24de1dfb17b8386d5 0 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 77536fb19b6db0d24de1dfb17b8386d5 0 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=77536fb19b6db0d24de1dfb17b8386d5 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Otr 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Otr 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Otr 00:28:26.565 10:56:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:26.565 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1f7faac3255724fb55dc0e592af19bcbc80f56ab62e68c2c528e2d45ea30b87f 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.fT4 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1f7faac3255724fb55dc0e592af19bcbc80f56ab62e68c2c528e2d45ea30b87f 3 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1f7faac3255724fb55dc0e592af19bcbc80f56ab62e68c2c528e2d45ea30b87f 3 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1f7faac3255724fb55dc0e592af19bcbc80f56ab62e68c2c528e2d45ea30b87f 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.fT4 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.fT4 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.fT4 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 4062144 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 4062144 ']' 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:26.566 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6ud 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.5MQ ]] 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5MQ 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Og5 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.7j6 ]] 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7j6 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.qCg 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.mKG ]] 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mKG 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.GUK 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Otr ]] 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Otr 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.fT4 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.825 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.083 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:27.083 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:27.083 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:27.083 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.083 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.083 10:56:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:27.084 10:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:29.620 Waiting for block devices as requested 00:28:29.620 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:29.877 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:29.877 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:29.877 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:29.877 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:30.136 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:30.136 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:30.136 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:30.394 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:30.394 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:30.394 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:30.394 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:30.653 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:30.653 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:30.653 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:30.910 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:30.911 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:31.477 No valid GPT data, bailing 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:31.477 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:31.737 00:28:31.737 Discovery Log Number of Records 2, Generation counter 2 00:28:31.737 =====Discovery Log Entry 0====== 00:28:31.737 trtype: tcp 00:28:31.737 adrfam: ipv4 00:28:31.737 subtype: current discovery subsystem 00:28:31.737 treq: not specified, sq flow control disable supported 00:28:31.737 portid: 1 00:28:31.737 trsvcid: 4420 00:28:31.737 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:31.737 traddr: 10.0.0.1 00:28:31.737 eflags: none 00:28:31.737 sectype: none 00:28:31.737 =====Discovery Log Entry 1====== 00:28:31.737 trtype: tcp 00:28:31.737 adrfam: ipv4 00:28:31.737 subtype: nvme subsystem 00:28:31.737 treq: not specified, sq flow control disable supported 00:28:31.737 portid: 1 00:28:31.737 trsvcid: 4420 00:28:31.737 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:31.737 traddr: 10.0.0.1 00:28:31.737 eflags: none 00:28:31.737 sectype: none 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]] 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.737 nvme0n1 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.737 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: ]] 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
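The DHHC-1 strings being echoed into the target's configfs keys above follow the secret representation from NVMe TP 8006. As a rough sketch of that layout (the trailing CRC-32 and its little-endian placement are my assumption about the format, not something this log states), the base64 payload can be unpacked like this:

```python
# Sketch: unpack the DHHC-1 secrets that appear in this log.
# Assumed layout (per NVMe TP 8006):  DHHC-1:<hh>:<base64(key || crc32_le(key))>:
# where <hh> selects the secret hash transform (00 = none, 01/02/03 = SHA-256/384/512).
import base64
import struct
import zlib

def parse_dhchap_secret(secret: str) -> dict:
    prefix, hh, b64 = secret.rstrip(":").split(":", 2)
    assert prefix == "DHHC-1"
    raw = base64.b64decode(b64)
    key, crc = raw[:-4], struct.unpack("<I", raw[-4:])[0]
    return {
        "hash_id": hh,
        "key_len": len(key),                  # 32, 48, or 64 bytes of key material
        "crc_ok": crc == zlib.crc32(key),     # assumption: little-endian CRC-32 trailer
    }

# key1 from the log (nvmet_auth_set_key sha256 ffdhe2048 1)
info = parse_dhchap_secret(
    "DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==:"
)
print(info["hash_id"], info["key_len"])
```

This is only a format illustration; the test itself never decodes the secrets, it passes them verbatim to configfs and to `bdev_nvme_attach_controller`.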
00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.738 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.997 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.997 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.997 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.997 nvme0n1 00:28:31.997 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.997 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.997 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.997 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.997 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.997 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.997 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.997 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.998 10:56:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]] 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.998 
10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.998 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.256 nvme0n1 00:28:32.256 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.256 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.256 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.256 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.256 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.256 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.256 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.256 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.256 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.256 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.256 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.256 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.256 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:32.256 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.256 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: ]] 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.257 10:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:28:32.516 nvme0n1 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: ]] 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.516 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.775 nvme0n1 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:32.775 10:56:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.775 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.776 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.776 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.776 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.776 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.776 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.776 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.776 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.776 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.776 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.776 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.776 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.776 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:32.776 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.776 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.776 nvme0n1 00:28:32.776 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.035 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.035 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.035 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.035 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.035 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.035 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.036 
10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: ]] 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:33.036 
10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.036 10:56:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.036 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.296 nvme0n1 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.296 10:56:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]] 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.296 10:56:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.296 10:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.556 nvme0n1 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.556 10:56:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: ]] 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:33.556 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.557 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.816 nvme0n1 00:28:33.816 10:56:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:33.816 10:56:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: ]] 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:33.816 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.817 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.076 nvme0n1 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.076 10:56:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.076 nvme0n1 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.076 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: ]] 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.335 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.336 10:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.595 nvme0n1 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]] 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:34.595 
10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.595 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.854 nvme0n1 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.854 10:56:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: ]] 00:28:34.854 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.855 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.114 nvme0n1 00:28:35.114 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.114 10:56:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.114 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.114 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.114 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.114 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.114 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.114 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.114 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.114 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:35.373 
10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: ]] 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.373 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.374 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.374 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.374 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.374 10:56:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.374 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.374 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.374 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.374 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.374 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.374 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.374 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.374 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.374 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:35.374 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.374 10:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.633 nvme0n1 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.633 10:56:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.633 
10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.633 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.892 nvme0n1 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: ]] 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.892 10:56:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.892 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.459 nvme0n1 00:28:36.459 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.459 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.459 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.459 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.459 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.459 10:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.459 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.459 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.459 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.459 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.459 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.459 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.459 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:36.459 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.459 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:36.459 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:36.459 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:36.459 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:36.459 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:36.459 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:36.459 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:36.459 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:36.459 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]] 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:36.460 10:56:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.460 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.719 nvme0n1 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: ]] 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.719 10:56:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.719 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.287 nvme0n1 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.287 10:56:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:37.287 10:56:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: ]] 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.287 10:56:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.287 10:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.546 nvme0n1 00:28:37.546 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.546 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.546 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.546 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.546 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.546 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.805 10:56:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.805 10:56:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.805 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.064 nvme0n1 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:38.064 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: ]] 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:38.065 10:56:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.065 10:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.647 nvme0n1 00:28:38.647 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.647 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.647 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.647 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.647 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.647 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.905 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.905 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.905 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.905 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.905 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.905 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.905 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:38.905 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.905 10:56:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:38.905 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:38.905 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:38.905 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:38.905 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:38.905 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:38.905 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:38.905 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:38.905 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]] 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.906 10:56:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:38.906 10:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.906 10:56:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.526 nvme0n1 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: ]] 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.526 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.526 10:56:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.527 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.527 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.527 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.527 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.527 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.527 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.527 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.527 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.527 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.527 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.527 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:39.527 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.527 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.093 nvme0n1 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: ]] 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.093 10:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.659 nvme0n1 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.659 
10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.659 10:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.226 nvme0n1 00:28:41.226 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.226 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.226 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.226 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.226 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.485 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.485 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.485 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.485 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.485 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.485 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.485 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:41.485 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:41.485 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:28:41.485 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:41.485 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.485 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: ]] 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.486 nvme0n1 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.486 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.745 
10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]] 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.745 nvme0n1 
00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:41.745 10:56:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: ]] 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.745 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.004 
10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.004 nvme0n1 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.004 10:56:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: ]] 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.004 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.263 nvme0n1
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=:
00:28:42.263 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=:
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.264 10:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.522 nvme0n1
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw:
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=:
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw:
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: ]]
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=:
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:42.522 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:42.523 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:42.523 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:42.523 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:42.523 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:42.523 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:42.523 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:42.523 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:42.523 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.523 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.781 nvme0n1
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==:
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==:
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==:
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]]
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==:
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.781 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.044 nvme0n1
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V:
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX:
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V:
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: ]]
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX:
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.044 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.303 nvme0n1
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==:
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j:
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==:
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: ]]
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j:
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.303 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.304 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.304 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:43.304 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:43.304 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:43.304 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:43.304 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:43.304 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:43.304 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:43.304 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:43.304 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:43.304 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:43.304 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:43.304 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:43.304 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.304 10:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.563 nvme0n1
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=:
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=:
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.563 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.822 nvme0n1
00:28:43.822 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.822 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:43.822 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:43.822 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.822 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.822 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.822 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw:
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=:
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw:
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: ]]
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=:
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.823 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.082 nvme0n1
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- #
key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]] 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.082 
10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.082 10:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.341 nvme0n1 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.341 10:56:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:44.341 10:56:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: ]] 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.341 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.600 nvme0n1 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.600 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: ]] 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.860 10:56:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.860 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.119 nvme0n1 00:28:45.119 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.119 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.119 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.119 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.119 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.119 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.119 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.119 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.119 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.120 10:56:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:45.120 10:56:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:45.120 
10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.120 10:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.379 nvme0n1 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:45.379 10:56:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: ]] 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.379 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.947 nvme0n1 
00:28:45.947 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.947 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.947 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.947 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.947 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.947 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.947 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.947 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.947 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:45.948 10:56:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]] 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.948 
10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.948 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.207 nvme0n1 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.207 10:56:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:46.207 10:56:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: ]] 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:46.207 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:46.208 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.208 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:46.208 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.208 10:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.466 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.466 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.466 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.466 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.466 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.466 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.467 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.467 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.467 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.467 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.467 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.467 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.467 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:46.467 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.467 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.726 nvme0n1 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: ]] 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:46.726 10:56:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.726 10:56:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.726 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.294 nvme0n1 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.294 10:56:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:47.294 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:47.295 10:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.554 nvme0n1 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:47.554 10:56:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: ]] 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.554 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.813 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.813 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.813 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.813 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.813 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.813 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.813 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.813 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.813 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.813 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.813 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.813 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.813 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:47.813 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.813 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.382 nvme0n1 00:28:48.382 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:28:48.382 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.382 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.382 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.382 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.382 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.382 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.382 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.382 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.382 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.382 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.382 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]] 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.383 10:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.950 nvme0n1 00:28:48.950 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.950 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.950 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.950 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:48.950 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.950 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.950 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.950 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.950 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.950 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.950 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.950 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.950 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: ]] 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.951 10:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.519 nvme0n1 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: ]] 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.519 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.085 nvme0n1 00:28:50.085 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.343 10:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:50.910 nvme0n1 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: ]] 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:50.910 10:56:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.910 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.170 nvme0n1 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]] 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.170 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.430 nvme0n1 00:28:51.430 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.430 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.430 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.430 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:51.430 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.430 10:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: ]] 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.430 nvme0n1 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.430 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: ]] 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.689 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:28:51.690 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.690 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.690 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:51.690 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.690 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.690 nvme0n1 00:28:51.690 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.690 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.690 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.690 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.690 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.690 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.690 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.690 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.690 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.690 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.949 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:51.950 nvme0n1 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:51.950 10:56:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: ]] 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.950 10:56:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.950 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.209 nvme0n1 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:52.209 10:56:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]] 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:52.209 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.210 10:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.468 nvme0n1 00:28:52.468 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.468 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.468 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.468 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.468 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.468 
10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.468 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.468 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.468 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.468 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.468 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.468 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.468 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:52.468 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.468 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.468 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:52.468 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: ]] 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.469 10:56:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.469 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.727 nvme0n1 00:28:52.727 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.727 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.727 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.727 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.727 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.727 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.727 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.727 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.727 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.727 10:56:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.727 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.727 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.727 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:52.727 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: ]] 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.728 10:56:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.728 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.986 nvme0n1 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:52.987 10:56:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.987 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.246 nvme0n1 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.246 
10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: ]] 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.246 10:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.246 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.246 
10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.246 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.246 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.246 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.246 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.246 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.246 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.246 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.246 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.246 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.246 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.246 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:53.246 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.246 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.505 nvme0n1 00:28:53.505 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.505 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.505 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.505 10:56:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.505 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.505 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]] 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.764 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.023 nvme0n1 00:28:54.023 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.023 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.023 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.023 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.023 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.023 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.023 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: ]] 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.024 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.283 nvme0n1 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: ]] 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:54.283 10:56:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.283 10:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.283 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.283 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.283 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.283 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.283 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.283 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.283 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:54.283 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.283 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.543 nvme0n1 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.543 10:56:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.543 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.802 nvme0n1 00:28:54.802 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.802 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.802 
10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.802 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.802 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.802 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.802 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.802 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.802 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw: 00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=:
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw:
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: ]]
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=:
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:55.061 10:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.321 nvme0n1
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==:
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==:
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==:
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]]
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==:
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:55.321 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.889 nvme0n1
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V:
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX:
00:28:55.889 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V:
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: ]]
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX:
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:55.890 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.148 nvme0n1
00:28:56.148 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.148 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:56.148 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.148 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:56.149 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.149 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==:
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j:
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==:
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: ]]
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j:
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.408 10:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.667 nvme0n1
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=:
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=:
00:28:56.667 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.668 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.236 nvme0n1
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw:
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=:
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmYjZiNzVmOTcyODNiZWEzZjQ1MzVmNTYzMzRjMjLHv9Nw:
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=: ]]
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNjZGY0ZmVjYTRmM2Y5ODZiMWIxMDZlYjUxNGViNTM3NGYzNzcyOWY0MWM1YTI4YTAzY2E1OWU5Y2U3MDE3OKzhiNc=:
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.236 10:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.804 nvme0n1
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==:
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==:
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==:
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]]
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==:
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.804 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:57.805 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.805 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:57.805 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:57.805 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:57.805 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:57.805 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:57.805 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:57.805 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:57.805 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:57.805 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:57.805 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:57.805 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:57.805 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:57.805 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.805 10:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:58.387 nvme0n1
00:28:58.387 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:58.387 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:58.387 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:58.387 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:58.387 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:58.387 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:58.387 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:58.387 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:58.387 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:58.387 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:58.387 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:58.387 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:58.387 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:28:58.387 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:58.387 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:58.387 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:58.387 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:58.646 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V:
00:28:58.646 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX:
00:28:58.646 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:58.646 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:58.646 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V:
00:28:58.646 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: ]]
00:28:58.646 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX:
00:28:58.646 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:28:58.646 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:58.646 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:58.646 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:58.646 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:58.646 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:58.646 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:28:58.646 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:58.646 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:58.646 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:58.647 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:58.647 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:58.647 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:58.647 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:58.647 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:58.647 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:58.647 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:58.647 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:58.647 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:58.647 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.647 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.647 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:58.647 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.647 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.216 nvme0n1 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBlOTI5NGYwNjE0NmU2ODlkODU1Y2JhNjAyZThjMThiZTdiNGYxYmIxZmEwMDczrCrNKQ==: 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: ]] 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzc1MzZmYjE5YjZkYjBkMjRkZTFkZmIxN2I4Mzg2ZDXQHP8j: 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:59.216 10:56:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.216 10:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.784 nvme0n1 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY3ZmFhYzMyNTU3MjRmYjU1ZGMwZTU5MmFmMTliY2JjODBmNTZhYjYyZTY4YzJjNTI4ZTJkNDVlYTMwYjg3ZsR5m0M=: 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:59.784 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.785 
10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.785 10:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.451 nvme0n1 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]] 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.451 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.451 request: 00:29:00.452 { 00:29:00.452 "name": "nvme0", 00:29:00.452 "trtype": "tcp", 00:29:00.452 "traddr": "10.0.0.1", 00:29:00.452 "adrfam": "ipv4", 00:29:00.452 "trsvcid": "4420", 00:29:00.452 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:00.452 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:00.452 "prchk_reftag": false, 00:29:00.452 "prchk_guard": false, 00:29:00.452 "hdgst": false, 00:29:00.452 "ddgst": false, 00:29:00.452 "allow_unrecognized_csi": false, 00:29:00.452 "method": "bdev_nvme_attach_controller", 00:29:00.452 "req_id": 1 00:29:00.452 } 00:29:00.452 Got JSON-RPC error 
response 00:29:00.452 response: 00:29:00.452 { 00:29:00.452 "code": -5, 00:29:00.452 "message": "Input/output error" 00:29:00.452 } 00:29:00.452 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:00.452 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:00.452 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:00.452 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:00.452 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:00.452 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.452 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:00.452 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.452 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.452 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.713 request: 
00:29:00.713 { 00:29:00.713 "name": "nvme0", 00:29:00.713 "trtype": "tcp", 00:29:00.713 "traddr": "10.0.0.1", 00:29:00.713 "adrfam": "ipv4", 00:29:00.713 "trsvcid": "4420", 00:29:00.713 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:00.713 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:00.713 "prchk_reftag": false, 00:29:00.713 "prchk_guard": false, 00:29:00.713 "hdgst": false, 00:29:00.713 "ddgst": false, 00:29:00.713 "dhchap_key": "key2", 00:29:00.713 "allow_unrecognized_csi": false, 00:29:00.713 "method": "bdev_nvme_attach_controller", 00:29:00.713 "req_id": 1 00:29:00.713 } 00:29:00.713 Got JSON-RPC error response 00:29:00.713 response: 00:29:00.713 { 00:29:00.713 "code": -5, 00:29:00.713 "message": "Input/output error" 00:29:00.713 } 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:00.713 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.714 10:56:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.714 request: 00:29:00.714 { 00:29:00.714 "name": "nvme0", 00:29:00.714 "trtype": "tcp", 00:29:00.714 "traddr": "10.0.0.1", 00:29:00.714 "adrfam": "ipv4", 00:29:00.714 "trsvcid": "4420", 00:29:00.714 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:00.714 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:00.714 "prchk_reftag": false, 00:29:00.714 "prchk_guard": false, 00:29:00.714 "hdgst": false, 00:29:00.714 "ddgst": false, 00:29:00.714 "dhchap_key": "key1", 00:29:00.714 "dhchap_ctrlr_key": "ckey2", 00:29:00.714 "allow_unrecognized_csi": false, 00:29:00.714 "method": "bdev_nvme_attach_controller", 00:29:00.714 "req_id": 1 00:29:00.714 } 00:29:00.714 Got JSON-RPC error response 00:29:00.714 response: 00:29:00.714 { 00:29:00.714 "code": -5, 00:29:00.714 "message": "Input/output error" 00:29:00.714 } 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.714 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.972 nvme0n1 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:00.972 10:56:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: ]] 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:00.972 
10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.972 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.231 request: 00:29:01.231 { 00:29:01.231 "name": "nvme0", 00:29:01.231 "dhchap_key": "key1", 00:29:01.231 "dhchap_ctrlr_key": "ckey2", 00:29:01.231 "method": "bdev_nvme_set_keys", 00:29:01.231 "req_id": 1 00:29:01.231 } 00:29:01.231 Got JSON-RPC error response 00:29:01.231 response: 
00:29:01.231 { 00:29:01.231 "code": -13, 00:29:01.231 "message": "Permission denied" 00:29:01.231 } 00:29:01.231 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:01.231 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:01.231 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:01.231 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:01.231 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:01.231 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.231 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:01.231 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.231 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.231 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.231 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:01.231 10:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:02.169 10:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.169 10:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.169 10:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:02.169 10:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.169 10:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.169 10:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:02.169 10:56:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:03.106 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.106 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:03.106 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.106 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.106 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiNzc4OWI0YTBlODZiNjFlZTI0YmI4ZTBjNzNkMzMxYzlhM2IwMzNlMTIxZjc4icHFcQ==: 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: ]] 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM3YmExNDJhZGI5YzQ5MWQzNDgwMjc1MDY4ZGYwZTI2NjJhMDJlODhmZGNhMDE5fiopxw==: 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.366 10:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.366 nvme0n1 00:29:03.366 10:56:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTBjNDdiNDc0NTYzZTM1MjRjYTExMzM4YWRlZDE2YzSoPd0V: 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: ]] 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyNTE3YjYwOGE5YWM1NGYzNjg3YWU1YTIyZmU2ZTMBnlPX: 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:03.366 10:56:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.366 request: 00:29:03.366 { 00:29:03.366 "name": "nvme0", 00:29:03.366 "dhchap_key": "key2", 00:29:03.366 "dhchap_ctrlr_key": "ckey1", 00:29:03.366 "method": "bdev_nvme_set_keys", 00:29:03.366 "req_id": 1 00:29:03.366 } 00:29:03.366 Got JSON-RPC error response 00:29:03.366 response: 00:29:03.366 { 00:29:03.366 "code": -13, 00:29:03.366 "message": "Permission denied" 00:29:03.366 } 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:03.366 10:56:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.366 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.625 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.625 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:29:03.626 10:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:04.563 rmmod nvme_tcp 
00:29:04.563 rmmod nvme_fabrics 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 4062144 ']' 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 4062144 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 4062144 ']' 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 4062144 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4062144 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4062144' 00:29:04.563 killing process with pid 4062144 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 4062144 00:29:04.563 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 4062144 00:29:04.823 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:04.823 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:04.823 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:04.823 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:29:04.823 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:29:04.823 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:04.823 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:04.823 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:04.823 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:04.823 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.823 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.823 10:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.359 10:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.359 10:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:07.359 10:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:07.359 10:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:07.359 10:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:07.359 10:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:29:07.359 10:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:07.359 10:56:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:07.359 10:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:07.359 10:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:07.359 10:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:07.359 10:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:07.359 10:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:09.899 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:09.899 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:09.899 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:09.899 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:09.899 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:09.899 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:09.899 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:09.899 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:09.899 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:09.899 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:09.899 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:09.899 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:09.899 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:09.899 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:09.899 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:09.899 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:11.277 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:11.536 10:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.6ud /tmp/spdk.key-null.Og5 /tmp/spdk.key-sha256.qCg /tmp/spdk.key-sha384.GUK 
/tmp/spdk.key-sha512.fT4 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:11.536 10:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:14.069 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:14.069 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:14.069 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:14.069 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:14.069 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:14.069 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:14.069 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:14.069 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:14.069 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:14.069 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:14.069 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:14.328 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:14.328 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:14.328 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:14.328 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:14.328 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:14.328 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:14.328 00:29:14.328 real 0m54.644s 00:29:14.328 user 0m48.838s 00:29:14.328 sys 0m12.725s 00:29:14.328 10:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:14.328 10:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.328 ************************************ 00:29:14.328 END TEST nvmf_auth_host 00:29:14.329 ************************************ 00:29:14.329 10:57:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:29:14.329 10:57:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:14.329 10:57:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:14.329 10:57:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:14.329 10:57:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.329 ************************************ 00:29:14.329 START TEST nvmf_digest 00:29:14.329 ************************************ 00:29:14.329 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:14.588 * Looking for test storage... 00:29:14.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:14.588 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:14.589 10:57:04 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:14.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.589 --rc genhtml_branch_coverage=1 00:29:14.589 --rc genhtml_function_coverage=1 00:29:14.589 --rc genhtml_legend=1 00:29:14.589 --rc geninfo_all_blocks=1 00:29:14.589 --rc geninfo_unexecuted_blocks=1 00:29:14.589 00:29:14.589 ' 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:14.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.589 --rc genhtml_branch_coverage=1 00:29:14.589 --rc genhtml_function_coverage=1 00:29:14.589 --rc genhtml_legend=1 00:29:14.589 --rc geninfo_all_blocks=1 00:29:14.589 --rc geninfo_unexecuted_blocks=1 00:29:14.589 00:29:14.589 ' 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:14.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.589 --rc genhtml_branch_coverage=1 00:29:14.589 --rc genhtml_function_coverage=1 00:29:14.589 --rc genhtml_legend=1 00:29:14.589 --rc geninfo_all_blocks=1 00:29:14.589 --rc geninfo_unexecuted_blocks=1 00:29:14.589 00:29:14.589 ' 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:14.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.589 --rc genhtml_branch_coverage=1 00:29:14.589 --rc genhtml_function_coverage=1 00:29:14.589 --rc genhtml_legend=1 00:29:14.589 --rc geninfo_all_blocks=1 00:29:14.589 --rc geninfo_unexecuted_blocks=1 00:29:14.589 00:29:14.589 ' 00:29:14.589 10:57:04 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.589 
10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:14.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.589 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:14.590 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:14.590 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:14.590 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.590 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.590 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.590 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:14.590 10:57:04 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:14.590 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:29:14.590 10:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:21.158 10:57:09 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:21.158 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:21.159 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:21.159 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:21.159 Found net devices under 0000:86:00.0: cvl_0_0 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:21.159 Found net devices under 0000:86:00.1: cvl_0_1 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:21.159 10:57:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:21.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:21.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:29:21.159 00:29:21.159 --- 10.0.0.2 ping statistics --- 00:29:21.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.159 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:21.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:21.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:29:21.159 00:29:21.159 --- 10.0.0.1 ping statistics --- 00:29:21.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.159 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:21.159 ************************************ 00:29:21.159 START TEST nvmf_digest_clean 00:29:21.159 ************************************ 00:29:21.159 
10:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=4076428 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 4076428 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4076428 ']' 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.159 10:57:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.159 10:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:21.159 [2024-11-19 10:57:10.350612] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:29:21.160 [2024-11-19 10:57:10.350656] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:21.160 [2024-11-19 10:57:10.431061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.160 [2024-11-19 10:57:10.473122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:21.160 [2024-11-19 10:57:10.473157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:21.160 [2024-11-19 10:57:10.473164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:21.160 [2024-11-19 10:57:10.473170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:21.160 [2024-11-19 10:57:10.473174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:21.160 [2024-11-19 10:57:10.473747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.418 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.418 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:21.418 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:21.418 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:21.418 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:21.677 null0 00:29:21.677 [2024-11-19 10:57:11.307530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:21.677 [2024-11-19 10:57:11.331735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4076563 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4076563 /var/tmp/bperf.sock 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4076563 ']' 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:21.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.677 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:21.677 [2024-11-19 10:57:11.384739] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:29:21.677 [2024-11-19 10:57:11.384780] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4076563 ] 00:29:21.677 [2024-11-19 10:57:11.442846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.936 [2024-11-19 10:57:11.486506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.936 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.936 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:21.936 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:21.936 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:21.936 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:22.195 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.195 10:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.453 nvme0n1 00:29:22.453 10:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:22.453 10:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:22.453 Running I/O for 2 seconds... 00:29:24.760 26152.00 IOPS, 102.16 MiB/s [2024-11-19T09:57:14.552Z] 25529.00 IOPS, 99.72 MiB/s 00:29:24.760 Latency(us) 00:29:24.760 [2024-11-19T09:57:14.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.760 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:24.761 nvme0n1 : 2.00 25549.02 99.80 0.00 0.00 5005.43 2200.14 13856.18 00:29:24.761 [2024-11-19T09:57:14.553Z] =================================================================================================================== 00:29:24.761 [2024-11-19T09:57:14.553Z] Total : 25549.02 99.80 0.00 0.00 5005.43 2200.14 13856.18 00:29:24.761 { 00:29:24.761 "results": [ 00:29:24.761 { 00:29:24.761 "job": "nvme0n1", 00:29:24.761 "core_mask": "0x2", 00:29:24.761 "workload": "randread", 00:29:24.761 "status": "finished", 00:29:24.761 "queue_depth": 128, 00:29:24.761 "io_size": 4096, 00:29:24.761 "runtime": 2.003443, 00:29:24.761 "iops": 25549.01736660339, 00:29:24.761 "mibps": 99.8008490882945, 00:29:24.761 "io_failed": 0, 00:29:24.761 "io_timeout": 0, 00:29:24.761 "avg_latency_us": 5005.427490627087, 00:29:24.761 "min_latency_us": 2200.137142857143, 00:29:24.761 "max_latency_us": 13856.182857142858 00:29:24.761 } 00:29:24.761 ], 00:29:24.761 "core_count": 1 00:29:24.761 } 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:24.761 | select(.opcode=="crc32c") 00:29:24.761 | "\(.module_name) \(.executed)"' 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4076563 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4076563 ']' 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4076563 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4076563 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4076563' 00:29:24.761 killing process with pid 4076563 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4076563 00:29:24.761 Received shutdown signal, test time was about 2.000000 seconds 00:29:24.761 00:29:24.761 Latency(us) 00:29:24.761 [2024-11-19T09:57:14.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.761 [2024-11-19T09:57:14.553Z] =================================================================================================================== 00:29:24.761 [2024-11-19T09:57:14.553Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.761 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4076563 00:29:25.020 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:25.020 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:25.020 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:25.020 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:25.020 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:25.020 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:25.020 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:25.020 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4077145 00:29:25.020 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 4077145 /var/tmp/bperf.sock 00:29:25.020 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:25.020 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4077145 ']' 00:29:25.020 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:25.020 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.020 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:25.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:25.020 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.020 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:25.020 [2024-11-19 10:57:14.693145] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:29:25.020 [2024-11-19 10:57:14.693191] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4077145 ] 00:29:25.020 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:25.020 Zero copy mechanism will not be used. 
00:29:25.020 [2024-11-19 10:57:14.768197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.279 [2024-11-19 10:57:14.810169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.279 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.279 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:25.279 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:25.279 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:25.279 10:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:25.538 10:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.538 10:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.797 nvme0n1 00:29:25.797 10:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:25.797 10:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:25.797 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:25.797 Zero copy mechanism will not be used. 00:29:25.797 Running I/O for 2 seconds... 
00:29:27.668 5596.00 IOPS, 699.50 MiB/s [2024-11-19T09:57:17.460Z] 5747.00 IOPS, 718.38 MiB/s 00:29:27.668 Latency(us) 00:29:27.668 [2024-11-19T09:57:17.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.668 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:27.668 nvme0n1 : 2.00 5745.75 718.22 0.00 0.00 2781.95 663.16 5242.88 00:29:27.668 [2024-11-19T09:57:17.460Z] =================================================================================================================== 00:29:27.668 [2024-11-19T09:57:17.460Z] Total : 5745.75 718.22 0.00 0.00 2781.95 663.16 5242.88 00:29:27.668 { 00:29:27.668 "results": [ 00:29:27.668 { 00:29:27.668 "job": "nvme0n1", 00:29:27.668 "core_mask": "0x2", 00:29:27.668 "workload": "randread", 00:29:27.668 "status": "finished", 00:29:27.668 "queue_depth": 16, 00:29:27.668 "io_size": 131072, 00:29:27.668 "runtime": 2.003221, 00:29:27.668 "iops": 5745.7464753015265, 00:29:27.668 "mibps": 718.2183094126908, 00:29:27.668 "io_failed": 0, 00:29:27.668 "io_timeout": 0, 00:29:27.668 "avg_latency_us": 2781.948939141947, 00:29:27.668 "min_latency_us": 663.1619047619048, 00:29:27.668 "max_latency_us": 5242.88 00:29:27.668 } 00:29:27.668 ], 00:29:27.668 "core_count": 1 00:29:27.668 } 00:29:27.927 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:27.927 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:27.927 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:27.927 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:27.927 | select(.opcode=="crc32c") 00:29:27.927 | "\(.module_name) \(.executed)"' 00:29:27.927 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:27.927 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:27.927 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:27.927 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:27.927 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:27.927 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4077145 00:29:27.927 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4077145 ']' 00:29:27.927 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4077145 00:29:27.927 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:27.927 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:27.927 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4077145 00:29:28.186 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:28.186 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:28.186 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4077145' 00:29:28.186 killing process with pid 4077145 00:29:28.186 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4077145 00:29:28.186 Received shutdown signal, test time was about 2.000000 seconds 
00:29:28.186 00:29:28.186 Latency(us) 00:29:28.186 [2024-11-19T09:57:17.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.186 [2024-11-19T09:57:17.978Z] =================================================================================================================== 00:29:28.186 [2024-11-19T09:57:17.978Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:28.186 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4077145 00:29:28.187 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:28.187 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:28.187 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:28.187 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:28.187 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:28.187 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:28.187 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:28.187 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:28.187 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4077633 00:29:28.187 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4077633 /var/tmp/bperf.sock 00:29:28.187 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4077633 ']' 00:29:28.187 10:57:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:28.187 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.187 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:28.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:28.187 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.187 10:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:28.187 [2024-11-19 10:57:17.928702] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:29:28.187 [2024-11-19 10:57:17.928751] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4077633 ] 00:29:28.445 [2024-11-19 10:57:18.004339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.445 [2024-11-19 10:57:18.041666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.445 10:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.445 10:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:28.445 10:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:28.445 10:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:28.445 10:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:28.704 10:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:28.704 10:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:28.962 nvme0n1 00:29:28.962 10:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:28.962 10:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:29.220 Running I/O for 2 seconds... 
00:29:31.091 28523.00 IOPS, 111.42 MiB/s [2024-11-19T09:57:20.883Z] 28200.50 IOPS, 110.16 MiB/s 00:29:31.091 Latency(us) 00:29:31.091 [2024-11-19T09:57:20.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.091 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:31.091 nvme0n1 : 2.01 28210.46 110.20 0.00 0.00 4531.47 2215.74 12670.29 00:29:31.091 [2024-11-19T09:57:20.883Z] =================================================================================================================== 00:29:31.091 [2024-11-19T09:57:20.883Z] Total : 28210.46 110.20 0.00 0.00 4531.47 2215.74 12670.29 00:29:31.091 { 00:29:31.091 "results": [ 00:29:31.091 { 00:29:31.091 "job": "nvme0n1", 00:29:31.091 "core_mask": "0x2", 00:29:31.091 "workload": "randwrite", 00:29:31.091 "status": "finished", 00:29:31.091 "queue_depth": 128, 00:29:31.091 "io_size": 4096, 00:29:31.091 "runtime": 2.0061, 00:29:31.091 "iops": 28210.458102786502, 00:29:31.091 "mibps": 110.19710196400978, 00:29:31.091 "io_failed": 0, 00:29:31.091 "io_timeout": 0, 00:29:31.091 "avg_latency_us": 4531.470304471443, 00:29:31.091 "min_latency_us": 2215.7409523809524, 00:29:31.091 "max_latency_us": 12670.293333333333 00:29:31.091 } 00:29:31.091 ], 00:29:31.091 "core_count": 1 00:29:31.091 } 00:29:31.091 10:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:31.091 10:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:31.091 10:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:31.091 10:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:31.091 | select(.opcode=="crc32c") 00:29:31.091 | "\(.module_name) \(.executed)"' 00:29:31.091 10:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:31.350 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:31.350 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:31.350 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:31.350 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:31.350 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4077633 00:29:31.350 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4077633 ']' 00:29:31.350 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4077633 00:29:31.350 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:31.350 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.350 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4077633 00:29:31.350 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:31.350 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:31.350 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4077633' 00:29:31.350 killing process with pid 4077633 00:29:31.350 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4077633 00:29:31.350 Received shutdown signal, test time was about 2.000000 seconds 
00:29:31.350 00:29:31.350 Latency(us) 00:29:31.350 [2024-11-19T09:57:21.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.350 [2024-11-19T09:57:21.142Z] =================================================================================================================== 00:29:31.350 [2024-11-19T09:57:21.142Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:31.350 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4077633 00:29:31.609 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:31.609 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:31.609 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:31.609 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:31.609 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:31.609 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:31.609 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:31.609 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4078144 00:29:31.609 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4078144 /var/tmp/bperf.sock 00:29:31.609 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:31.609 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4078144 ']' 00:29:31.609 10:57:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:31.609 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:31.609 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:31.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:31.609 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:31.609 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:31.609 [2024-11-19 10:57:21.317976] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:29:31.609 [2024-11-19 10:57:21.318025] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4078144 ] 00:29:31.609 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:31.609 Zero copy mechanism will not be used. 
00:29:31.609 [2024-11-19 10:57:21.393623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.868 [2024-11-19 10:57:21.435873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.868 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.868 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:31.868 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:31.868 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:31.868 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:32.127 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:32.127 10:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:32.386 nvme0n1 00:29:32.386 10:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:32.386 10:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:32.644 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:32.644 Zero copy mechanism will not be used. 00:29:32.644 Running I/O for 2 seconds... 
00:29:34.517 6468.00 IOPS, 808.50 MiB/s [2024-11-19T09:57:24.309Z] 6451.00 IOPS, 806.38 MiB/s 00:29:34.517 Latency(us) 00:29:34.517 [2024-11-19T09:57:24.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.517 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:34.517 nvme0n1 : 2.00 6446.22 805.78 0.00 0.00 2477.44 1919.27 9362.29 00:29:34.517 [2024-11-19T09:57:24.309Z] =================================================================================================================== 00:29:34.517 [2024-11-19T09:57:24.309Z] Total : 6446.22 805.78 0.00 0.00 2477.44 1919.27 9362.29 00:29:34.517 { 00:29:34.517 "results": [ 00:29:34.517 { 00:29:34.517 "job": "nvme0n1", 00:29:34.517 "core_mask": "0x2", 00:29:34.517 "workload": "randwrite", 00:29:34.517 "status": "finished", 00:29:34.517 "queue_depth": 16, 00:29:34.517 "io_size": 131072, 00:29:34.517 "runtime": 2.004429, 00:29:34.517 "iops": 6446.224835102666, 00:29:34.517 "mibps": 805.7781043878332, 00:29:34.517 "io_failed": 0, 00:29:34.517 "io_timeout": 0, 00:29:34.517 "avg_latency_us": 2477.4413892482157, 00:29:34.517 "min_latency_us": 1919.2685714285715, 00:29:34.517 "max_latency_us": 9362.285714285714 00:29:34.517 } 00:29:34.517 ], 00:29:34.517 "core_count": 1 00:29:34.517 } 00:29:34.517 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:34.517 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:34.517 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:34.517 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:34.517 | select(.opcode=="crc32c") 00:29:34.517 | "\(.module_name) \(.executed)"' 00:29:34.517 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:34.777 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:34.777 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:34.777 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:34.777 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:34.777 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4078144 00:29:34.777 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4078144 ']' 00:29:34.777 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4078144 00:29:34.777 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:34.777 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:34.777 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4078144 00:29:34.777 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:34.777 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:34.777 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4078144' 00:29:34.777 killing process with pid 4078144 00:29:34.777 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4078144 00:29:34.777 Received shutdown signal, test time was about 2.000000 seconds 
00:29:34.777 00:29:34.777 Latency(us) 00:29:34.777 [2024-11-19T09:57:24.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.777 [2024-11-19T09:57:24.569Z] =================================================================================================================== 00:29:34.777 [2024-11-19T09:57:24.569Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:34.777 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4078144 00:29:35.036 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 4076428 00:29:35.036 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4076428 ']' 00:29:35.036 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4076428 00:29:35.036 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:35.036 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:35.036 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4076428 00:29:35.036 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:35.036 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:35.036 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4076428' 00:29:35.036 killing process with pid 4076428 00:29:35.036 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4076428 00:29:35.036 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4076428 00:29:35.296 00:29:35.296 
real 0m14.568s 00:29:35.296 user 0m27.288s 00:29:35.296 sys 0m4.648s 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:35.296 ************************************ 00:29:35.296 END TEST nvmf_digest_clean 00:29:35.296 ************************************ 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:35.296 ************************************ 00:29:35.296 START TEST nvmf_digest_error 00:29:35.296 ************************************ 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=4078829 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:35.296 
10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 4078829 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4078829 ']' 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.296 10:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:35.296 [2024-11-19 10:57:24.979308] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:29:35.296 [2024-11-19 10:57:24.979346] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.297 [2024-11-19 10:57:25.054876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.556 [2024-11-19 10:57:25.096086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.556 [2024-11-19 10:57:25.096118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:35.556 [2024-11-19 10:57:25.096125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.556 [2024-11-19 10:57:25.096131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.556 [2024-11-19 10:57:25.096136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:35.556 [2024-11-19 10:57:25.096702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.556 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:35.557 [2024-11-19 10:57:25.169137] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.557 10:57:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:35.557 null0 00:29:35.557 [2024-11-19 10:57:25.259215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.557 [2024-11-19 10:57:25.283407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4078848 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4078848 /var/tmp/bperf.sock 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4078848 ']' 
00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:35.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.557 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:35.557 [2024-11-19 10:57:25.333223] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:29:35.557 [2024-11-19 10:57:25.333262] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4078848 ] 00:29:35.815 [2024-11-19 10:57:25.407322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.815 [2024-11-19 10:57:25.449235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.815 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.815 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:35.815 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:35.815 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:36.075 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:36.075 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.075 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:36.075 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.075 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:36.075 10:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:36.333 nvme0n1 00:29:36.333 10:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:36.333 10:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.333 10:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:36.333 10:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.333 10:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:36.333 10:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:36.593 Running I/O for 2 seconds... 00:29:36.593 [2024-11-19 10:57:26.148689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.148727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.148737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.157343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.157367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.157377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.169053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.169074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.169083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.177440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.177462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25278 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.177471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.189551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.189572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.189580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.202173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.202194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.202208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.212952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.212972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.212980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.225065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.225084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.225092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.232786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.232807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.232815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.244198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.244222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.244230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.256668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.256688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.256696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.268856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.268875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.268883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.281143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.281163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.281171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.289263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.289282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.289290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.300712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.300732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.300743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.313048] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.313067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.313075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.325477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.325496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.325504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.337980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.337999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.338008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.350108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.350129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.350137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.362338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.362358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.362366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.593 [2024-11-19 10:57:26.370704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.593 [2024-11-19 10:57:26.370724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.593 [2024-11-19 10:57:26.370732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.383273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.383303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.383311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.395477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.395497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.395504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.407900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.407924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.407932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.419003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.419023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.419031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.427462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.427482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.427490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.438111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.438129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 
10:57:26.438137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.445809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.445829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.445836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.456424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.456444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.456452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.464857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.464879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.464887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.475979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.475999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15056 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.476008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.483624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.483644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.483652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.495213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.495234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.495241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.507690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.507709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.507717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.517763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.517783] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.517791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.527179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.527199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.527214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.535428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.535448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.535455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.544988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.545007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.545016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.555052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.555071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.555079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.562984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.563003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.563011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.575699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.575719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.575733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.583989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.584009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.584017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.595943] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.595964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.595972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.608002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.608022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.608030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.620472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.620492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.620500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.853 [2024-11-19 10:57:26.632473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:36.853 [2024-11-19 10:57:26.632492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.853 [2024-11-19 10:57:26.632500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:29:36.854 [2024-11-19 10:57:26.640823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:36.854 [2024-11-19 10:57:26.640842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.854 [2024-11-19 10:57:26.640853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.113 [2024-11-19 10:57:26.652749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.113 [2024-11-19 10:57:26.652769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.113 [2024-11-19 10:57:26.652777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.113 [2024-11-19 10:57:26.664937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.113 [2024-11-19 10:57:26.664957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.113 [2024-11-19 10:57:26.664966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.113 [2024-11-19 10:57:26.677838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.113 [2024-11-19 10:57:26.677861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.113 [2024-11-19 10:57:26.677870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.113 [2024-11-19 10:57:26.688687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.113 [2024-11-19 10:57:26.688706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.113 [2024-11-19 10:57:26.688713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.113 [2024-11-19 10:57:26.697356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.113 [2024-11-19 10:57:26.697375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.113 [2024-11-19 10:57:26.697383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.113 [2024-11-19 10:57:26.709704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.113 [2024-11-19 10:57:26.709723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.113 [2024-11-19 10:57:26.709731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.113 [2024-11-19 10:57:26.721638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.113 [2024-11-19 10:57:26.721658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.113 [2024-11-19 10:57:26.721666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.114 [2024-11-19 10:57:26.731804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.114 [2024-11-19 10:57:26.731824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.114 [2024-11-19 10:57:26.731832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.114 [2024-11-19 10:57:26.740794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.114 [2024-11-19 10:57:26.740814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.114 [2024-11-19 10:57:26.740823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.114 [2024-11-19 10:57:26.751163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.114 [2024-11-19 10:57:26.751183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.114 [2024-11-19 10:57:26.751191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.114 [2024-11-19 10:57:26.761486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.114 [2024-11-19 10:57:26.761505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.114 [2024-11-19 10:57:26.761516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.114 [2024-11-19 10:57:26.769640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.114 [2024-11-19 10:57:26.769660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.114 [2024-11-19 10:57:26.769668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.114 [2024-11-19 10:57:26.779936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.114 [2024-11-19 10:57:26.779956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.114 [2024-11-19 10:57:26.779964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.114 [2024-11-19 10:57:26.789027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.114 [2024-11-19 10:57:26.789046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.114 [2024-11-19 10:57:26.789055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.114 [2024-11-19 10:57:26.798173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.114 [2024-11-19 10:57:26.798193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.114 [2024-11-19 10:57:26.798200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.114 [2024-11-19 10:57:26.808544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.114 [2024-11-19 10:57:26.808564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.114 [2024-11-19 10:57:26.808572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.114 [2024-11-19 10:57:26.817147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.114 [2024-11-19 10:57:26.817167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.114 [2024-11-19 10:57:26.817175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.114 [2024-11-19 10:57:26.829889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.114 [2024-11-19 10:57:26.829909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.114 [2024-11-19 10:57:26.829917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.114 [2024-11-19 10:57:26.840817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.114 [2024-11-19 10:57:26.840836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.114 [2024-11-19 10:57:26.840844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.114 [2024-11-19 10:57:26.849656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.114 [2024-11-19 10:57:26.849678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.114 [2024-11-19 10:57:26.849686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.114 [2024-11-19 10:57:26.861207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.114 [2024-11-19 10:57:26.861227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.114 [2024-11-19 10:57:26.861235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.114 [2024-11-19 10:57:26.869566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.114 [2024-11-19 10:57:26.869585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.114 [2024-11-19 10:57:26.869593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.114 [2024-11-19 10:57:26.880741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.114 [2024-11-19 10:57:26.880761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.114 [2024-11-19 10:57:26.880769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.114 [2024-11-19 10:57:26.893035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.114 [2024-11-19 10:57:26.893054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.114 [2024-11-19 10:57:26.893063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.374 [2024-11-19 10:57:26.905639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.374 [2024-11-19 10:57:26.905659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.374 [2024-11-19 10:57:26.905668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.374 [2024-11-19 10:57:26.916438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.374 [2024-11-19 10:57:26.916457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.374 [2024-11-19 10:57:26.916465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.374 [2024-11-19 10:57:26.925081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.374 [2024-11-19 10:57:26.925102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.374 [2024-11-19 10:57:26.925110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.374 [2024-11-19 10:57:26.934588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.374 [2024-11-19 10:57:26.934610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.374 [2024-11-19 10:57:26.934618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.374 [2024-11-19 10:57:26.944391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.374 [2024-11-19 10:57:26.944412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.374 [2024-11-19 10:57:26.944420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.374 [2024-11-19 10:57:26.953024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.374 [2024-11-19 10:57:26.953043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.374 [2024-11-19 10:57:26.953052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.374 [2024-11-19 10:57:26.963720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.374 [2024-11-19 10:57:26.963741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.374 [2024-11-19 10:57:26.963748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.374 [2024-11-19 10:57:26.976340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.374 [2024-11-19 10:57:26.976363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.374 [2024-11-19 10:57:26.976370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.374 [2024-11-19 10:57:26.988486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.374 [2024-11-19 10:57:26.988508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.374 [2024-11-19 10:57:26.988516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.374 [2024-11-19 10:57:27.000818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.374 [2024-11-19 10:57:27.000840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.374 [2024-11-19 10:57:27.000848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.374 [2024-11-19 10:57:27.011597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.374 [2024-11-19 10:57:27.011617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.374 [2024-11-19 10:57:27.011625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.374 [2024-11-19 10:57:27.019475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.374 [2024-11-19 10:57:27.019495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.374 [2024-11-19 10:57:27.019503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.374 [2024-11-19 10:57:27.030071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.375 [2024-11-19 10:57:27.030092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.375 [2024-11-19 10:57:27.030103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.375 [2024-11-19 10:57:27.040566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.375 [2024-11-19 10:57:27.040586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.375 [2024-11-19 10:57:27.040594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.375 [2024-11-19 10:57:27.048664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.375 [2024-11-19 10:57:27.048683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.375 [2024-11-19 10:57:27.048691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.375 [2024-11-19 10:57:27.061019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.375 [2024-11-19 10:57:27.061038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.375 [2024-11-19 10:57:27.061046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.375 [2024-11-19 10:57:27.071348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.375 [2024-11-19 10:57:27.071368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.375 [2024-11-19 10:57:27.071376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.375 [2024-11-19 10:57:27.081564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.375 [2024-11-19 10:57:27.081584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.375 [2024-11-19 10:57:27.081591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.375 [2024-11-19 10:57:27.092422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.375 [2024-11-19 10:57:27.092442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.375 [2024-11-19 10:57:27.092450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.375 [2024-11-19 10:57:27.100310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.375 [2024-11-19 10:57:27.100330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.375 [2024-11-19 10:57:27.100338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.375 [2024-11-19 10:57:27.110701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.375 [2024-11-19 10:57:27.110722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.375 [2024-11-19 10:57:27.110730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.375 [2024-11-19 10:57:27.119826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.375 [2024-11-19 10:57:27.119850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.375 [2024-11-19 10:57:27.119858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.375 [2024-11-19 10:57:27.129036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.375 [2024-11-19 10:57:27.129056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.375 [2024-11-19 10:57:27.129064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.375 24013.00 IOPS, 93.80 MiB/s [2024-11-19T09:57:27.167Z] [2024-11-19 10:57:27.139688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.375 [2024-11-19 10:57:27.139709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.375 [2024-11-19 10:57:27.139717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.375 [2024-11-19 10:57:27.150849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.375 [2024-11-19 10:57:27.150869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.375 [2024-11-19 10:57:27.150877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.375 [2024-11-19 10:57:27.161131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.375 [2024-11-19 10:57:27.161151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.375 [2024-11-19 10:57:27.161159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.169059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.169079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.169088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.179315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.179335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.179343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.190155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.190175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.190183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.198762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.198783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.198795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.210326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.210346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.210354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.222301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.222322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.222330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.230253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.230274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.230282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.240641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.240663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.240671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.251838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.251858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.251866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.260265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.260285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.260293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.270995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.271014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.271022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.281950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.281971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.281979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.290708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.290731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.290740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.301751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.301772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.301780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.312605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.312626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.312633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.322565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.322585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.322593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.330360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.330380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.330388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.341824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.341843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.341852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.352558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.352579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.352587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.360271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.360291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.360299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.370360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.370379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.370387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.382965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.382985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.382993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.391163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.391182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.391190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.402230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.402250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.635 [2024-11-19 10:57:27.402258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.635 [2024-11-19 10:57:27.411377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.635 [2024-11-19 10:57:27.411395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.636 [2024-11-19 10:57:27.411403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.636 [2024-11-19 10:57:27.421620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.636 [2024-11-19 10:57:27.421639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.636 [2024-11-19 10:57:27.421647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.896 [2024-11-19 10:57:27.430992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.896 [2024-11-19 10:57:27.431014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.896 [2024-11-19 10:57:27.431023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.896 [2024-11-19 10:57:27.440309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.896 [2024-11-19 10:57:27.440330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.896 [2024-11-19 10:57:27.440337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.896 [2024-11-19 10:57:27.449598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.896 [2024-11-19 10:57:27.449618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.896 [2024-11-19 10:57:27.449627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.896 [2024-11-19 10:57:27.458771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370)
00:29:37.896 [2024-11-19 10:57:27.458790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.896 [2024-11-19 10:57:27.458802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0
sqhd:0001 p:0 m:0 dnr:0 00:29:37.896 [2024-11-19 10:57:27.468001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.896 [2024-11-19 10:57:27.468021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.896 [2024-11-19 10:57:27.468029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.896 [2024-11-19 10:57:27.477157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.896 [2024-11-19 10:57:27.477177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.896 [2024-11-19 10:57:27.477185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.896 [2024-11-19 10:57:27.486271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.896 [2024-11-19 10:57:27.486291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.896 [2024-11-19 10:57:27.486299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.896 [2024-11-19 10:57:27.496617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.896 [2024-11-19 10:57:27.496639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.896 [2024-11-19 10:57:27.496647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.896 [2024-11-19 10:57:27.504157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.896 [2024-11-19 10:57:27.504177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.896 [2024-11-19 10:57:27.504185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.896 [2024-11-19 10:57:27.514773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.896 [2024-11-19 10:57:27.514793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.896 [2024-11-19 10:57:27.514802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.896 [2024-11-19 10:57:27.523897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.896 [2024-11-19 10:57:27.523916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.896 [2024-11-19 10:57:27.523924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.896 [2024-11-19 10:57:27.532897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.896 [2024-11-19 10:57:27.532916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.896 [2024-11-19 10:57:27.532924] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.896 [2024-11-19 10:57:27.541342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.896 [2024-11-19 10:57:27.541364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.897 [2024-11-19 10:57:27.541372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.897 [2024-11-19 10:57:27.551068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.897 [2024-11-19 10:57:27.551088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.897 [2024-11-19 10:57:27.551098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.897 [2024-11-19 10:57:27.560724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.897 [2024-11-19 10:57:27.560743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.897 [2024-11-19 10:57:27.560751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.897 [2024-11-19 10:57:27.570512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.897 [2024-11-19 10:57:27.570531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10899 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:37.897 [2024-11-19 10:57:27.570539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.897 [2024-11-19 10:57:27.578945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.897 [2024-11-19 10:57:27.578964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.897 [2024-11-19 10:57:27.578972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.897 [2024-11-19 10:57:27.590090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.897 [2024-11-19 10:57:27.590110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.897 [2024-11-19 10:57:27.590118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.897 [2024-11-19 10:57:27.600032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.897 [2024-11-19 10:57:27.600051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.897 [2024-11-19 10:57:27.600058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.897 [2024-11-19 10:57:27.613164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.897 [2024-11-19 10:57:27.613185] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.897 [2024-11-19 10:57:27.613193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.897 [2024-11-19 10:57:27.621392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.897 [2024-11-19 10:57:27.621412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.897 [2024-11-19 10:57:27.621424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.897 [2024-11-19 10:57:27.631662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.897 [2024-11-19 10:57:27.631681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.897 [2024-11-19 10:57:27.631689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.897 [2024-11-19 10:57:27.642834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.897 [2024-11-19 10:57:27.642853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.897 [2024-11-19 10:57:27.642862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.897 [2024-11-19 10:57:27.650676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.897 [2024-11-19 
10:57:27.650695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.897 [2024-11-19 10:57:27.650703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.897 [2024-11-19 10:57:27.661670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.897 [2024-11-19 10:57:27.661690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.897 [2024-11-19 10:57:27.661698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.897 [2024-11-19 10:57:27.671618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.897 [2024-11-19 10:57:27.671637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.897 [2024-11-19 10:57:27.671645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.897 [2024-11-19 10:57:27.680080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:37.897 [2024-11-19 10:57:27.680099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.897 [2024-11-19 10:57:27.680107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.692396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.692417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.692426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.704745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.704765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.704773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.717255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.717278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.717286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.729193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.729219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.729226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.737265] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.737284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.737292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.748249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.748270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.748278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.760263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.760284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.760291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.771277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.771296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.771304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.781238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.781259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.781267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.790772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.790791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.790799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.799043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.799062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.799070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.808186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.808211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.808219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.818740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.818760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.818768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.830157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.830177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.830185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.843510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.843530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.843538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.851464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.851483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 
10:57:27.851491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.861942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.861961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.861969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.873361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.873381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.873389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.885395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.885415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.885423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.893036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.893057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2277 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.893071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.903133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.903154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.903161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.915111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.915131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.915140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.926932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.926952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.156 [2024-11-19 10:57:27.926961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.156 [2024-11-19 10:57:27.934300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.156 [2024-11-19 10:57:27.934319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.157 [2024-11-19 10:57:27.934329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:27.946221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:27.946243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:27.946251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:27.956942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:27.956963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:27.956971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:27.967737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:27.967758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:27.967766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:27.977245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:27.977264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:27.977273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:27.988303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:27.988327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:27.988335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:27.998337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:27.998357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:27.998364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:28.006555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:28.006574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:28.006582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:28.015862] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:28.015881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:28.015889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:28.024951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:28.024970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:28.024979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:28.035220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:28.035239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:28.035247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:28.043843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:28.043861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:28.043869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:28.053868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:28.053887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:28.053895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:28.063754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:28.063775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:28.063782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:28.074090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:28.074118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:28.074125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:28.083928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:28.083948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:28.083955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:28.092094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:28.092113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:28.092121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:28.102139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:28.102158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:28.102166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:28.111798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.416 [2024-11-19 10:57:28.111818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.416 [2024-11-19 10:57:28.111825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.416 [2024-11-19 10:57:28.121159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.417 [2024-11-19 10:57:28.121178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.417 [2024-11-19 
10:57:28.121186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.417 [2024-11-19 10:57:28.129296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b6370) 00:29:38.417 [2024-11-19 10:57:28.129315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.417 [2024-11-19 10:57:28.129322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.417 24763.50 IOPS, 96.73 MiB/s 00:29:38.417 Latency(us) 00:29:38.417 [2024-11-19T09:57:28.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.417 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:38.417 nvme0n1 : 2.00 24778.82 96.79 0.00 0.00 5160.20 2559.02 18474.91 00:29:38.417 [2024-11-19T09:57:28.209Z] =================================================================================================================== 00:29:38.417 [2024-11-19T09:57:28.209Z] Total : 24778.82 96.79 0.00 0.00 5160.20 2559.02 18474.91 00:29:38.417 { 00:29:38.417 "results": [ 00:29:38.417 { 00:29:38.417 "job": "nvme0n1", 00:29:38.417 "core_mask": "0x2", 00:29:38.417 "workload": "randread", 00:29:38.417 "status": "finished", 00:29:38.417 "queue_depth": 128, 00:29:38.417 "io_size": 4096, 00:29:38.417 "runtime": 2.003929, 00:29:38.417 "iops": 24778.822004172802, 00:29:38.417 "mibps": 96.79227345380001, 00:29:38.417 "io_failed": 0, 00:29:38.417 "io_timeout": 0, 00:29:38.417 "avg_latency_us": 5160.195678869917, 00:29:38.417 "min_latency_us": 2559.024761904762, 00:29:38.417 "max_latency_us": 18474.910476190475 00:29:38.417 } 00:29:38.417 ], 00:29:38.417 "core_count": 1 00:29:38.417 } 00:29:38.417 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:38.417 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:38.417 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:38.417 | .driver_specific 00:29:38.417 | .nvme_error 00:29:38.417 | .status_code 00:29:38.417 | .command_transient_transport_error' 00:29:38.417 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:38.676 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 194 > 0 )) 00:29:38.676 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4078848 00:29:38.676 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4078848 ']' 00:29:38.676 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4078848 00:29:38.676 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:38.676 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.676 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4078848 00:29:38.676 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:38.676 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:38.676 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4078848' 00:29:38.676 killing process with pid 4078848 00:29:38.676 
10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4078848 00:29:38.676 Received shutdown signal, test time was about 2.000000 seconds 00:29:38.676 00:29:38.676 Latency(us) 00:29:38.676 [2024-11-19T09:57:28.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.676 [2024-11-19T09:57:28.468Z] =================================================================================================================== 00:29:38.676 [2024-11-19T09:57:28.468Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:38.676 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4078848 00:29:38.935 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:38.935 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:38.935 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:38.935 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:38.935 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:38.935 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4079424 00:29:38.935 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4079424 /var/tmp/bperf.sock 00:29:38.935 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:38.935 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4079424 ']' 00:29:38.935 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:29:38.935 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.935 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:38.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:38.935 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.935 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:38.935 [2024-11-19 10:57:28.617665] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:29:38.935 [2024-11-19 10:57:28.617716] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4079424 ] 00:29:38.935 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:38.935 Zero copy mechanism will not be used. 
00:29:38.935 [2024-11-19 10:57:28.693002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.194 [2024-11-19 10:57:28.732912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.194 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:39.194 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:39.194 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:39.194 10:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:39.451 10:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:39.451 10:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.451 10:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:39.451 10:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.451 10:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:39.451 10:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:39.709 nvme0n1 00:29:39.709 10:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:39.709 10:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.709 10:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:39.709 10:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.709 10:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:39.709 10:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:39.709 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:39.709 Zero copy mechanism will not be used. 00:29:39.709 Running I/O for 2 seconds... 00:29:39.709 [2024-11-19 10:57:29.489251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.709 [2024-11-19 10:57:29.489290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.709 [2024-11-19 10:57:29.489300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.709 [2024-11-19 10:57:29.494720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.709 [2024-11-19 10:57:29.494745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.709 [2024-11-19 10:57:29.494754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.968 
[2024-11-19 10:57:29.499978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.968 [2024-11-19 10:57:29.500001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.968 [2024-11-19 10:57:29.500010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.968 [2024-11-19 10:57:29.505243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.968 [2024-11-19 10:57:29.505266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.968 [2024-11-19 10:57:29.505274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.968 [2024-11-19 10:57:29.510467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.968 [2024-11-19 10:57:29.510488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.968 [2024-11-19 10:57:29.510496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.968 [2024-11-19 10:57:29.515719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.968 [2024-11-19 10:57:29.515741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.968 [2024-11-19 10:57:29.515749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.968 [2024-11-19 10:57:29.520935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.520957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.520965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.526132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.526153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.526161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.531369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.531391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.531399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.536591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.536614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.536622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.541790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.541812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.541820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.547071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.547094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.547102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.552262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.552284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.552292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.557528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.557550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.557558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.562700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.562720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.562727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.567891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.567913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.567920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.573045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.573066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.573074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.578247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.578268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.578280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.583477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.583499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.583507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.588697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.588718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.588726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.593931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.593952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.593960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.599117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.599138] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.599146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.604296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.604317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.604324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.609538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.609560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.609567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.614818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.614839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.614847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.620043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.620064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.620072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.625307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.625332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.625339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.630546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.630567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.630575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.635772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.635793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.635801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.640987] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.641008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.641016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.646196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.646226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.646234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.969 [2024-11-19 10:57:29.651383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.969 [2024-11-19 10:57:29.651405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.969 [2024-11-19 10:57:29.651413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.656592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.656613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.656621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.661815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.661836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.661845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.666998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.667019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.667033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.672198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.672225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.672233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.677406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.677427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.677434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.682673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.682694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.682702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.687871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.687891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.687899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.693084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.693105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.693112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.698247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.698269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.698276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.703402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.703423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.703430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.708603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.708624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.708632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.713809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.713833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.713841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.719027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.719047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.719055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.724212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.724232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.724240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.729369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.729390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.729399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.734604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.734624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.734632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.739854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.739875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.739882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.745003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.745023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.745032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.750253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.750274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.750282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.970 [2024-11-19 10:57:29.755487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:39.970 [2024-11-19 10:57:29.755509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.970 [2024-11-19 10:57:29.755518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.230 [2024-11-19 10:57:29.760743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.230 [2024-11-19 10:57:29.760765] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.230 [2024-11-19 10:57:29.760773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.230 [2024-11-19 10:57:29.765964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.230 [2024-11-19 10:57:29.765987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.230 [2024-11-19 10:57:29.765995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.230 [2024-11-19 10:57:29.771153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.230 [2024-11-19 10:57:29.771175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.230 [2024-11-19 10:57:29.771184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.230 [2024-11-19 10:57:29.776412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.230 [2024-11-19 10:57:29.776433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.230 [2024-11-19 10:57:29.776441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.230 [2024-11-19 10:57:29.781657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fcf580) 00:29:40.230 [2024-11-19 10:57:29.781679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.230 [2024-11-19 10:57:29.781687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.786822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.786842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.786850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.792084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.792105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.792113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.798086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.798110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.798118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.803778] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.803800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.803812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.809872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.809894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.809903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.816190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.816219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.816227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.823403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.823428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.823437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.830787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.830811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.830819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.838214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.838236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.838246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.842278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.842301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.842312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.848839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.848863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.848871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.856653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.856674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.856682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.864141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.864168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.864176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.871390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.871411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.871419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.878979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.879001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 
10:57:29.879009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.886688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.886710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.886719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.894211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.894239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.894247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.901859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.901881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.901889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.909335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.909356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23392 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.909365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.917333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.917355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.917363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.924871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.924893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.924901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.931828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.931850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.931858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.939557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.939578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.939587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.946922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.946944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.946952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.954083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.954104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.954112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.962029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.962052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.231 [2024-11-19 10:57:29.962060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.231 [2024-11-19 10:57:29.970348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fcf580) 00:29:40.231 [2024-11-19 10:57:29.970370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.232 [2024-11-19 10:57:29.970378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.232 [2024-11-19 10:57:29.978281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.232 [2024-11-19 10:57:29.978304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.232 [2024-11-19 10:57:29.978312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.232 [2024-11-19 10:57:29.985185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.232 [2024-11-19 10:57:29.985212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.232 [2024-11-19 10:57:29.985220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.232 [2024-11-19 10:57:29.991710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.232 [2024-11-19 10:57:29.991730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.232 [2024-11-19 10:57:29.991742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.232 [2024-11-19 10:57:29.997680] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.232 [2024-11-19 10:57:29.997702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.232 [2024-11-19 10:57:29.997710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.232 [2024-11-19 10:57:30.002954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.232 [2024-11-19 10:57:30.002975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.232 [2024-11-19 10:57:30.002983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.232 [2024-11-19 10:57:30.008810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.232 [2024-11-19 10:57:30.008853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.232 [2024-11-19 10:57:30.008872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.232 [2024-11-19 10:57:30.014498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.232 [2024-11-19 10:57:30.014522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.232 [2024-11-19 10:57:30.014531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:29:40.492 [2024-11-19 10:57:30.020031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.492 [2024-11-19 10:57:30.020054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.493 [2024-11-19 10:57:30.020064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.493 [2024-11-19 10:57:30.025445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.493 [2024-11-19 10:57:30.025469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.493 [2024-11-19 10:57:30.025477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.493 [2024-11-19 10:57:30.031812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.493 [2024-11-19 10:57:30.031837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.493 [2024-11-19 10:57:30.031846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.493 [2024-11-19 10:57:30.038210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.493 [2024-11-19 10:57:30.038234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.493 [2024-11-19 10:57:30.038243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.043231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.043254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.043263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.048078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.048100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.048109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.053367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.053390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.053398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.059078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.059101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.059109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.064170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.064193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.064206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.069463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.069485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.069493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.075018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.075041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.075049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.080945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.080967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.080976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.086007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.086029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.086043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.090970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.090991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.090999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.095986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.096008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.096017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.101138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.101159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.101167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.106370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.106392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.106401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.111684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.111705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.111713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.117004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.117026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.117034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.122256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.122277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.122286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.127542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.127563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.127571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.132872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.132897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.132905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.138177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.138198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.138214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.143435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.143457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.143465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.148788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.148810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.148818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.154114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.493 [2024-11-19 10:57:30.154136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.493 [2024-11-19 10:57:30.154144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.493 [2024-11-19 10:57:30.159388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.159410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.159417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.164633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.164654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.164663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.169904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.169926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.169935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.174888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.174909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.174917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.179995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.180016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.180024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.185112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.185133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.185141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.190188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.190216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.190224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.195472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.195494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.195502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.200492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.200513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.200521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.205624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.205645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.205653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.210980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.211001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.211009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.216284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.216305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.216313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.221550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.221571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.221583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.226858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.226881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.226889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.232232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.232253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.232261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.237518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.237538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.237546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.242987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.243007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.243015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.248428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.248449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.248457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.253887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.253908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.253916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.259351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.259372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.259380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.264873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.264894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.264902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.270327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.270352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.270360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.494 [2024-11-19 10:57:30.275776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.494 [2024-11-19 10:57:30.275797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.494 [2024-11-19 10:57:30.275805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.754 [2024-11-19 10:57:30.281270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.754 [2024-11-19 10:57:30.281294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.754 [2024-11-19 10:57:30.281303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.754 [2024-11-19 10:57:30.286870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.754 [2024-11-19 10:57:30.286893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.754 [2024-11-19 10:57:30.286901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.754 [2024-11-19 10:57:30.292777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.754 [2024-11-19 10:57:30.292798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.754 [2024-11-19 10:57:30.292806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.754 [2024-11-19 10:57:30.298302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.754 [2024-11-19 10:57:30.298322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.754 [2024-11-19 10:57:30.298330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.754 [2024-11-19 10:57:30.304242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.754 [2024-11-19 10:57:30.304264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.754 [2024-11-19 10:57:30.304272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.754 [2024-11-19 10:57:30.309728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.754 [2024-11-19 10:57:30.309748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.754 [2024-11-19 10:57:30.309758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.754 [2024-11-19 10:57:30.315089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.754 [2024-11-19 10:57:30.315111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.754 [2024-11-19 10:57:30.315119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.754 [2024-11-19 10:57:30.320460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.754 [2024-11-19 10:57:30.320481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.754 [2024-11-19 10:57:30.320489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.754 [2024-11-19 10:57:30.325720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.754 [2024-11-19 10:57:30.325741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.754 [2024-11-19 10:57:30.325750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.754 [2024-11-19 10:57:30.331048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.754 [2024-11-19 10:57:30.331069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.754 [2024-11-19 10:57:30.331077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.754 [2024-11-19 10:57:30.336318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.336340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.336348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.341774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.341795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.341803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.347200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.347227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.347235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.352783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.352804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.352812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.358453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.358473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.358481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.363910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.363935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.363943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.369305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.369326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.369334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.374625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.374646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.374654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.380081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.380102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.380110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.385514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.385534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.385542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.390903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.390923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.390931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.396421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.396442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.396450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.401907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.401927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.401935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.407350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.407371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.407380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.412954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.412975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.412983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.418497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.418519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.418526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.424147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.424168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.424176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.429666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.429686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.429694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.434984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.435004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.435012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.440183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.440209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.440217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.445513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.445534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.445542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.450709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.450729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.450737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.456018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.456039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.456050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.461432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.461453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.461461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.466984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.467005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.467013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.755 [2024-11-19 10:57:30.472507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580)
00:29:40.755 [2024-11-19 10:57:30.472528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.755 [2024-11-19 10:57:30.472535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.755 [2024-11-19 10:57:30.478006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.755 [2024-11-19 10:57:30.478027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.755 [2024-11-19 10:57:30.478035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.755 [2024-11-19 10:57:30.483311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.756 [2024-11-19 10:57:30.483332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.756 [2024-11-19 10:57:30.483340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.756 5515.00 IOPS, 689.38 MiB/s [2024-11-19T09:57:30.548Z] [2024-11-19 10:57:30.489589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.756 [2024-11-19 10:57:30.489610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.756 [2024-11-19 10:57:30.489618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.756 [2024-11-19 10:57:30.494811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.756 [2024-11-19 10:57:30.494832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:40.756 [2024-11-19 10:57:30.494841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.756 [2024-11-19 10:57:30.500197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.756 [2024-11-19 10:57:30.500224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.756 [2024-11-19 10:57:30.500232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.756 [2024-11-19 10:57:30.505999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.756 [2024-11-19 10:57:30.506023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.756 [2024-11-19 10:57:30.506031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.756 [2024-11-19 10:57:30.511298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.756 [2024-11-19 10:57:30.511319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.756 [2024-11-19 10:57:30.511327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.756 [2024-11-19 10:57:30.516666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.756 [2024-11-19 10:57:30.516687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.756 [2024-11-19 10:57:30.516695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.756 [2024-11-19 10:57:30.522170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.756 [2024-11-19 10:57:30.522192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.756 [2024-11-19 10:57:30.522200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.756 [2024-11-19 10:57:30.527543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.756 [2024-11-19 10:57:30.527565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.756 [2024-11-19 10:57:30.527573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.756 [2024-11-19 10:57:30.533097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.756 [2024-11-19 10:57:30.533117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.756 [2024-11-19 10:57:30.533125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.756 [2024-11-19 10:57:30.538580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:40.756 [2024-11-19 10:57:30.538603] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.756 [2024-11-19 10:57:30.538611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.016 [2024-11-19 10:57:30.543891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.016 [2024-11-19 10:57:30.543913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.016 [2024-11-19 10:57:30.543921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.016 [2024-11-19 10:57:30.549285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.016 [2024-11-19 10:57:30.549307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.016 [2024-11-19 10:57:30.549315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.016 [2024-11-19 10:57:30.554781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.016 [2024-11-19 10:57:30.554802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.016 [2024-11-19 10:57:30.554810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.016 [2024-11-19 10:57:30.560186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fcf580) 00:29:41.016 [2024-11-19 10:57:30.560215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.016 [2024-11-19 10:57:30.560223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.016 [2024-11-19 10:57:30.565589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.016 [2024-11-19 10:57:30.565610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.016 [2024-11-19 10:57:30.565618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.016 [2024-11-19 10:57:30.571041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.016 [2024-11-19 10:57:30.571062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.016 [2024-11-19 10:57:30.571069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.016 [2024-11-19 10:57:30.576566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.016 [2024-11-19 10:57:30.576588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.016 [2024-11-19 10:57:30.576596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.016 [2024-11-19 10:57:30.582485] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.016 [2024-11-19 10:57:30.582506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.016 [2024-11-19 10:57:30.582514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.016 [2024-11-19 10:57:30.585393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.016 [2024-11-19 10:57:30.585414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.016 [2024-11-19 10:57:30.585421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.016 [2024-11-19 10:57:30.590731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.016 [2024-11-19 10:57:30.590752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.016 [2024-11-19 10:57:30.590760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.016 [2024-11-19 10:57:30.596220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.016 [2024-11-19 10:57:30.596246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.016 [2024-11-19 10:57:30.596254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:29:41.016 [2024-11-19 10:57:30.601814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.016 [2024-11-19 10:57:30.601835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.016 [2024-11-19 10:57:30.601843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.016 [2024-11-19 10:57:30.607220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.016 [2024-11-19 10:57:30.607240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.016 [2024-11-19 10:57:30.607248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.016 [2024-11-19 10:57:30.612669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.016 [2024-11-19 10:57:30.612689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.016 [2024-11-19 10:57:30.612697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.016 [2024-11-19 10:57:30.617840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.016 [2024-11-19 10:57:30.617861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.016 [2024-11-19 10:57:30.617868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.016 [2024-11-19 10:57:30.623293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.016 [2024-11-19 10:57:30.623314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.016 [2024-11-19 10:57:30.623322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.628577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.628599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.628607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.634051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.634073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.634081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.639707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.639728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 
10:57:30.639736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.645237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.645258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.645266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.650761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.650782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.650790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.656088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.656108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.656115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.661378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.661399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14880 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.661406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.666683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.666703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.666711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.672069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.672090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.672097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.677440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.677460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.677468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.682809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.682829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.682837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.688180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.688206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.688217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.693652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.693673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.693682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.699025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.699046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.699054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.704370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.704391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.704398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.709917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.709938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.709945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.715386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.715407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.715415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.720651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.720671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.720679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.726158] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.726178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.726186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.732111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.732132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.732140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.737514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.737539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.737547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.743036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.743056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.743064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.748123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.748144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.748152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.753504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.753526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.753534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.758381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.758403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.758412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.761842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.761863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.017 [2024-11-19 10:57:30.761871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.017 [2024-11-19 10:57:30.766026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.017 [2024-11-19 10:57:30.766048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.018 [2024-11-19 10:57:30.766056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.018 [2024-11-19 10:57:30.771179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.018 [2024-11-19 10:57:30.771200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.018 [2024-11-19 10:57:30.771213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.018 [2024-11-19 10:57:30.776740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.018 [2024-11-19 10:57:30.776761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.018 [2024-11-19 10:57:30.776770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.018 [2024-11-19 10:57:30.781905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.018 [2024-11-19 10:57:30.781926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.018 [2024-11-19 
10:57:30.781934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.018 [2024-11-19 10:57:30.787353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.018 [2024-11-19 10:57:30.787374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.018 [2024-11-19 10:57:30.787382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.018 [2024-11-19 10:57:30.792216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.018 [2024-11-19 10:57:30.792236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.018 [2024-11-19 10:57:30.792244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.018 [2024-11-19 10:57:30.797356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.018 [2024-11-19 10:57:30.797376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.018 [2024-11-19 10:57:30.797383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.018 [2024-11-19 10:57:30.802609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.018 [2024-11-19 10:57:30.802631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7808 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.018 [2024-11-19 10:57:30.802640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.807768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.807790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.807798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.812357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.812378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.812387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.815326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.815346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.815354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.820555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.820576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.820587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.826244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.826264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.826272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.831713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.831733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.831741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.837041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.837061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.837069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.841517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.841536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.841544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.846807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.846827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.846835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.851635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.851655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.851663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.856577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.856597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.856605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.861772] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.861792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.861800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.866896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.866916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.866924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.872079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.872100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.872108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.877323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.877343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.877351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.882587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.882608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.882616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.888335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.888355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.888363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.893710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.893731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.893739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.899020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.899041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.899049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.904145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.904165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.904173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.909345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.909365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.909376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.914442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.278 [2024-11-19 10:57:30.914462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.278 [2024-11-19 10:57:30.914470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.278 [2024-11-19 10:57:30.919847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:30.919867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:30.919875] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:30.924020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:30.924041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:30.924049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:30.926946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:30.926965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:30.926973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:30.931961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:30.931982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:30.931990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:30.936995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:30.937015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:30.937023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:30.942151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:30.942173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:30.942181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:30.947115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:30.947138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:30.947147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:30.951757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:30.951782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:30.951790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:30.956778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:30.956798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:30.956806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:30.961719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:30.961739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:30.961746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:30.966648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:30.966669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:30.966677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:30.971769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:30.971789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:30.971797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:30.977044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:30.977065] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:30.977072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:30.982453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:30.982473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:30.982480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:30.987900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:30.987920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:30.987928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:30.993665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:30.993687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:30.993695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:30.999136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:30.999155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:30.999163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:31.004502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:31.004522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:31.004530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:31.009786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:31.009807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:31.009815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:31.015067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:31.015087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:31.015094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:31.020391] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:31.020412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:31.020420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:31.025820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:31.025841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:31.025849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:31.031229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:31.031250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:31.031257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:31.036646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:31.036667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:31.036675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:31.042435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:31.042456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:31.042467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:31.047976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:31.047997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:31.048005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:31.053535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.279 [2024-11-19 10:57:31.053556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.279 [2024-11-19 10:57:31.053564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.279 [2024-11-19 10:57:31.058980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.280 [2024-11-19 10:57:31.059000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.280 [2024-11-19 10:57:31.059008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.280 [2024-11-19 10:57:31.064267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.280 [2024-11-19 10:57:31.064294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.280 [2024-11-19 10:57:31.064306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.539 [2024-11-19 10:57:31.069600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.539 [2024-11-19 10:57:31.069623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.539 [2024-11-19 10:57:31.069631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.539 [2024-11-19 10:57:31.074899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.539 [2024-11-19 10:57:31.074921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.539 [2024-11-19 10:57:31.074929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.539 [2024-11-19 10:57:31.080134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.539 [2024-11-19 10:57:31.080155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.539 [2024-11-19 
10:57:31.080163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.539 [2024-11-19 10:57:31.085366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.539 [2024-11-19 10:57:31.085387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.539 [2024-11-19 10:57:31.085395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.539 [2024-11-19 10:57:31.090461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.090482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.090491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.095793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.095813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.095821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.101085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.101106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22464 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.101114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.106624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.106645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.106653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.112092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.112115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.112122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.118461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.118483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.118491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.123686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.123711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.123719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.128442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.128462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.128471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.133392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.133414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.133425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.138368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.138389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.138397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.143494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.143516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.143525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.148526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.148546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.148554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.153626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.153647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.153655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.158845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.158865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.158873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.164012] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.164032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.164040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.169310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.169331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.169339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.174703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.174723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.174731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.180055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.180079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.180087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.185346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.185367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.185375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.190700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.190721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.190728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.196188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.196233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.196242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.201630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.201651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.201659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.207316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.207337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.207344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.212922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.212942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.212951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.218350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.218371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.218379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.223713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.540 [2024-11-19 10:57:31.223734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.540 [2024-11-19 10:57:31.223742] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.540 [2024-11-19 10:57:31.229083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.229103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.229111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.234507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.234529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.234537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.239898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.239921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.239929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.245421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.245442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.245451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.250730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.250753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.250761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.256310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.256332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.256341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.261923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.261945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.261953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.267377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.267399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.267407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.272841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.272864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.272876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.278214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.278235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.278242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.283641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.283664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.283672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.289059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.289081] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.289089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.294438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.294460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.294468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.299886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.299907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.299915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.305251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.305272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.305280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.310334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.310354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.310363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.315342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.315364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.315371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.320398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.320423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.320431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.541 [2024-11-19 10:57:31.325652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.541 [2024-11-19 10:57:31.325674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.541 [2024-11-19 10:57:31.325682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.801 [2024-11-19 10:57:31.330346] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.801 [2024-11-19 10:57:31.330369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.801 [2024-11-19 10:57:31.330377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.801 [2024-11-19 10:57:31.333515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.801 [2024-11-19 10:57:31.333536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.801 [2024-11-19 10:57:31.333544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.801 [2024-11-19 10:57:31.338763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.801 [2024-11-19 10:57:31.338785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.801 [2024-11-19 10:57:31.338793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.801 [2024-11-19 10:57:31.344019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.801 [2024-11-19 10:57:31.344040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.801 [2024-11-19 10:57:31.344048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:29:41.801 [2024-11-19 10:57:31.349087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.801 [2024-11-19 10:57:31.349107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.801 [2024-11-19 10:57:31.349116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.801 [2024-11-19 10:57:31.354259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.801 [2024-11-19 10:57:31.354279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.801 [2024-11-19 10:57:31.354287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.801 [2024-11-19 10:57:31.359494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.801 [2024-11-19 10:57:31.359515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.801 [2024-11-19 10:57:31.359523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.801 [2024-11-19 10:57:31.364736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.801 [2024-11-19 10:57:31.364756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.801 [2024-11-19 10:57:31.364764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.801 [2024-11-19 10:57:31.369854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.801 [2024-11-19 10:57:31.369876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.801 [2024-11-19 10:57:31.369884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.801 [2024-11-19 10:57:31.375062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.801 [2024-11-19 10:57:31.375082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.801 [2024-11-19 10:57:31.375090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.801 [2024-11-19 10:57:31.380350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.801 [2024-11-19 10:57:31.380370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.801 [2024-11-19 10:57:31.380378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.801 [2024-11-19 10:57:31.385490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.801 [2024-11-19 10:57:31.385512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.801 [2024-11-19 10:57:31.385520] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.801 [2024-11-19 10:57:31.390728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.801 [2024-11-19 10:57:31.390749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.801 [2024-11-19 10:57:31.390756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.395997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.396018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.396026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.401303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.401324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.401333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.406543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.406563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.406574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.411776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.411796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.411804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.416937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.416958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.416966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.421803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.421825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.421833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.427078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.427099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.427107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.432397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.432418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.432426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.437695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.437716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.437724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.442616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.442637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.442645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.447886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.447908] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.447916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.453121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.453143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.453150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.458404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.458426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.458433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.463589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.463610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.463617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.468841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.468862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.468869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.474038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.474059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.474067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.479185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.479214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.479222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.484385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.484406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.484414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.802 [2024-11-19 10:57:31.489643] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcf580) 00:29:41.802 [2024-11-19 10:57:31.489665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.802 [2024-11-19 10:57:31.489672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.802 5724.00 IOPS, 715.50 MiB/s 00:29:41.802 Latency(us) 00:29:41.802 [2024-11-19T09:57:31.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.802 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:41.802 nvme0n1 : 2.00 5723.49 715.44 0.00 0.00 2792.90 600.75 9112.62 00:29:41.802 [2024-11-19T09:57:31.594Z] =================================================================================================================== 00:29:41.802 [2024-11-19T09:57:31.594Z] Total : 5723.49 715.44 0.00 0.00 2792.90 600.75 9112.62 00:29:41.802 { 00:29:41.802 "results": [ 00:29:41.802 { 00:29:41.802 "job": "nvme0n1", 00:29:41.802 "core_mask": "0x2", 00:29:41.802 "workload": "randread", 00:29:41.802 "status": "finished", 00:29:41.802 "queue_depth": 16, 00:29:41.802 "io_size": 131072, 00:29:41.802 "runtime": 2.002974, 00:29:41.802 "iops": 5723.489171601828, 00:29:41.802 "mibps": 715.4361464502285, 00:29:41.802 "io_failed": 0, 00:29:41.802 "io_timeout": 0, 00:29:41.802 "avg_latency_us": 2792.895432160303, 00:29:41.802 "min_latency_us": 600.7466666666667, 00:29:41.802 "max_latency_us": 9112.624761904763 00:29:41.802 } 00:29:41.802 ], 00:29:41.802 "core_count": 1 00:29:41.802 } 00:29:41.802 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:41.802 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:41.802 
10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:41.802 | .driver_specific 00:29:41.802 | .nvme_error 00:29:41.802 | .status_code 00:29:41.802 | .command_transient_transport_error' 00:29:41.802 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:42.061 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 370 > 0 )) 00:29:42.061 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4079424 00:29:42.061 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4079424 ']' 00:29:42.061 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4079424 00:29:42.061 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:42.061 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.061 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4079424 00:29:42.061 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:42.061 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:42.061 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4079424' 00:29:42.061 killing process with pid 4079424 00:29:42.061 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4079424 00:29:42.061 Received shutdown signal, test time was about 2.000000 seconds 00:29:42.061 
00:29:42.061 Latency(us) 00:29:42.061 [2024-11-19T09:57:31.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.061 [2024-11-19T09:57:31.853Z] =================================================================================================================== 00:29:42.061 [2024-11-19T09:57:31.853Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:42.061 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4079424 00:29:42.320 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:42.320 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:42.320 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:42.320 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:42.320 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:42.320 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4080011 00:29:42.320 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4080011 /var/tmp/bperf.sock 00:29:42.320 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:42.320 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4080011 ']' 00:29:42.320 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:42.320 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.320 10:57:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:42.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:42.320 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.320 10:57:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:42.320 [2024-11-19 10:57:31.963817] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:29:42.320 [2024-11-19 10:57:31.963863] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4080011 ] 00:29:42.320 [2024-11-19 10:57:32.038351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.320 [2024-11-19 10:57:32.079883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.579 10:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:42.579 10:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:42.579 10:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:42.579 10:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:42.579 10:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:42.579 10:57:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.579 10:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:42.579 10:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.579 10:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:42.579 10:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:43.147 nvme0n1 00:29:43.147 10:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:43.147 10:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.147 10:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:43.147 10:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.147 10:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:43.147 10:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:43.147 Running I/O for 2 seconds... 
00:29:43.147 [2024-11-19 10:57:32.812363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ee5c8 00:29:43.147 [2024-11-19 10:57:32.813245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.147 [2024-11-19 10:57:32.813274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:43.147 [2024-11-19 10:57:32.820871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e7818 00:29:43.147 [2024-11-19 10:57:32.821647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.147 [2024-11-19 10:57:32.821669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:43.147 [2024-11-19 10:57:32.830211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166eaab8 00:29:43.147 [2024-11-19 10:57:32.830972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.147 [2024-11-19 10:57:32.830992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:43.147 [2024-11-19 10:57:32.838765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e1b48 00:29:43.147 [2024-11-19 10:57:32.839442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.147 [2024-11-19 10:57:32.839461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:43.147 [2024-11-19 10:57:32.848440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e88f8 00:29:43.147 [2024-11-19 10:57:32.849110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.148 [2024-11-19 10:57:32.849129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:43.148 [2024-11-19 10:57:32.857262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e5220 00:29:43.148 [2024-11-19 10:57:32.858054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.148 [2024-11-19 10:57:32.858073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:43.148 [2024-11-19 10:57:32.866311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e5220 00:29:43.148 [2024-11-19 10:57:32.867108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.148 [2024-11-19 10:57:32.867126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:43.148 [2024-11-19 10:57:32.875276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e5220 00:29:43.148 [2024-11-19 10:57:32.875975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.148 [2024-11-19 10:57:32.875995] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:43.148 [2024-11-19 10:57:32.884244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e9168 00:29:43.148 [2024-11-19 10:57:32.884903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.148 [2024-11-19 10:57:32.884926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:43.148 [2024-11-19 10:57:32.894362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ef6a8 00:29:43.148 [2024-11-19 10:57:32.895568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.148 [2024-11-19 10:57:32.895586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:43.148 [2024-11-19 10:57:32.902605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166de038 00:29:43.148 [2024-11-19 10:57:32.903811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.148 [2024-11-19 10:57:32.903830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:43.148 [2024-11-19 10:57:32.912770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ebfd0 00:29:43.148 [2024-11-19 10:57:32.913773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.148 [2024-11-19 10:57:32.913792] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:43.148 [2024-11-19 10:57:32.921663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fc998 00:29:43.148 [2024-11-19 10:57:32.922656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.148 [2024-11-19 10:57:32.922674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:43.148 [2024-11-19 10:57:32.930474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fc998 00:29:43.148 [2024-11-19 10:57:32.931588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.148 [2024-11-19 10:57:32.931607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:43.407 [2024-11-19 10:57:32.939635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fc998 00:29:43.407 [2024-11-19 10:57:32.940757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.407 [2024-11-19 10:57:32.940778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:43.407 [2024-11-19 10:57:32.948651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fc998 00:29:43.407 [2024-11-19 10:57:32.949740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12962 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:43.407 [2024-11-19 10:57:32.949759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:43.407 [2024-11-19 10:57:32.957607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fc998 00:29:43.407 [2024-11-19 10:57:32.958704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.407 [2024-11-19 10:57:32.958723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:43.407 [2024-11-19 10:57:32.966588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fc998 00:29:43.407 [2024-11-19 10:57:32.967682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.407 [2024-11-19 10:57:32.967701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:43.407 [2024-11-19 10:57:32.974919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166eee38 00:29:43.407 [2024-11-19 10:57:32.975953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.407 [2024-11-19 10:57:32.975972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:43.407 [2024-11-19 10:57:32.983896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fc128 00:29:43.407 [2024-11-19 10:57:32.984877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:3712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.407 [2024-11-19 10:57:32.984896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:43.407 [2024-11-19 10:57:32.993552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fc128 00:29:43.407 [2024-11-19 10:57:32.994613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.407 [2024-11-19 10:57:32.994632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:43.407 [2024-11-19 10:57:33.002500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fc128 00:29:43.407 [2024-11-19 10:57:33.003573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.407 [2024-11-19 10:57:33.003592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:43.407 [2024-11-19 10:57:33.011454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fc128 00:29:43.407 [2024-11-19 10:57:33.012516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.407 [2024-11-19 10:57:33.012535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:43.407 [2024-11-19 10:57:33.020391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fc128 00:29:43.407 [2024-11-19 10:57:33.021458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.407 [2024-11-19 10:57:33.021479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:43.407 [2024-11-19 10:57:33.030490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fc128 00:29:43.407 [2024-11-19 10:57:33.031918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.407 [2024-11-19 10:57:33.031936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:43.407 [2024-11-19 10:57:33.038227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ed0b0 00:29:43.407 [2024-11-19 10:57:33.039293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.407 [2024-11-19 10:57:33.039311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:43.407 [2024-11-19 10:57:33.047165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ed0b0 00:29:43.407 [2024-11-19 10:57:33.048256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.407 [2024-11-19 10:57:33.048274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:43.407 [2024-11-19 10:57:33.056098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ed0b0 
00:29:43.408 [2024-11-19 10:57:33.057220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.408 [2024-11-19 10:57:33.057238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:43.408 [2024-11-19 10:57:33.065100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ed0b0 00:29:43.408 [2024-11-19 10:57:33.066215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.408 [2024-11-19 10:57:33.066236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:43.408 [2024-11-19 10:57:33.074294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ed0b0 00:29:43.408 [2024-11-19 10:57:33.075291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.408 [2024-11-19 10:57:33.075310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:43.408 [2024-11-19 10:57:33.084442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e0630 00:29:43.408 [2024-11-19 10:57:33.085924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.408 [2024-11-19 10:57:33.085943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:43.408 [2024-11-19 10:57:33.091694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1dab640) with pdu=0x2000166fb048 00:29:43.408 [2024-11-19 10:57:33.092686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.408 [2024-11-19 10:57:33.092704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.408 [2024-11-19 10:57:33.102085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f57b0 00:29:43.408 [2024-11-19 10:57:33.103381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.408 [2024-11-19 10:57:33.103399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:43.408 [2024-11-19 10:57:33.109607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6738 00:29:43.408 [2024-11-19 10:57:33.110427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.408 [2024-11-19 10:57:33.110446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.408 [2024-11-19 10:57:33.118610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166eaef0 00:29:43.408 [2024-11-19 10:57:33.119357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.408 [2024-11-19 10:57:33.119382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.408 [2024-11-19 10:57:33.127936] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fe720 00:29:43.408 [2024-11-19 10:57:33.128892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.408 [2024-11-19 10:57:33.128910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:43.408 [2024-11-19 10:57:33.136895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166feb58 00:29:43.408 [2024-11-19 10:57:33.137925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.408 [2024-11-19 10:57:33.137943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:43.408 [2024-11-19 10:57:33.145228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f0350 00:29:43.408 [2024-11-19 10:57:33.145927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.408 [2024-11-19 10:57:33.145946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:43.408 [2024-11-19 10:57:33.154360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f7538 00:29:43.408 [2024-11-19 10:57:33.154860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.408 [2024-11-19 10:57:33.154879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 
dnr:0 00:29:43.408 [2024-11-19 10:57:33.165720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e2c28 00:29:43.408 [2024-11-19 10:57:33.167225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.408 [2024-11-19 10:57:33.167243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:43.408 [2024-11-19 10:57:33.172851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f0788 00:29:43.408 [2024-11-19 10:57:33.173816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.408 [2024-11-19 10:57:33.173834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:43.408 [2024-11-19 10:57:33.182453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e5a90 00:29:43.408 [2024-11-19 10:57:33.183530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.408 [2024-11-19 10:57:33.183548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:43.408 [2024-11-19 10:57:33.191460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e5a90 00:29:43.408 [2024-11-19 10:57:33.192554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.408 [2024-11-19 10:57:33.192574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:43.667 [2024-11-19 10:57:33.200956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166feb58 00:29:43.667 [2024-11-19 10:57:33.202072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.667 [2024-11-19 10:57:33.202092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.667 [2024-11-19 10:57:33.208560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fb480 00:29:43.667 [2024-11-19 10:57:33.209044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.667 [2024-11-19 10:57:33.209064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:43.667 [2024-11-19 10:57:33.217662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e0ea0 00:29:43.667 [2024-11-19 10:57:33.218378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.218396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.227843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f8a50 00:29:43.668 [2024-11-19 10:57:33.229092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.229111] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.236962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f7100 00:29:43.668 [2024-11-19 10:57:33.238128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.238146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.245577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fbcf0 00:29:43.668 [2024-11-19 10:57:33.246560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.246579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.255627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fbcf0 00:29:43.668 [2024-11-19 10:57:33.257139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.257156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.262873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e3498 00:29:43.668 [2024-11-19 10:57:33.263797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.263815] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.272240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fc128 00:29:43.668 [2024-11-19 10:57:33.273344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.273362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.280571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e4de8 00:29:43.668 [2024-11-19 10:57:33.281327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.281345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.289665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fd208 00:29:43.668 [2024-11-19 10:57:33.290265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.290284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.299038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f31b8 00:29:43.668 [2024-11-19 10:57:33.299738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19869 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:43.668 [2024-11-19 10:57:33.299756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.308016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166eaab8 00:29:43.668 [2024-11-19 10:57:33.308934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.308952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.317389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f57b0 00:29:43.668 [2024-11-19 10:57:33.318466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.318484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.325006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f4f40 00:29:43.668 [2024-11-19 10:57:33.325601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.325620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.335061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6300 00:29:43.668 [2024-11-19 10:57:33.336130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 
nsid:1 lba:22430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.336148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.344212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166de038 00:29:43.668 [2024-11-19 10:57:33.344907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.344926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.353012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ea248 00:29:43.668 [2024-11-19 10:57:33.353962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.353983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.361678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6300 00:29:43.668 [2024-11-19 10:57:33.362369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.362388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.370496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e4578 00:29:43.668 [2024-11-19 10:57:33.371171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.371190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.379156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6b70 00:29:43.668 [2024-11-19 10:57:33.379954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.379975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.388844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fa7d8 00:29:43.668 [2024-11-19 10:57:33.389539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.389558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.397047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f7da8 00:29:43.668 [2024-11-19 10:57:33.397808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.397825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.406211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e5a90 
00:29:43.668 [2024-11-19 10:57:33.406903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.406921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.414990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fa3a0 00:29:43.668 [2024-11-19 10:57:33.415637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.415654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.424570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e38d0 00:29:43.668 [2024-11-19 10:57:33.425214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.425232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.434523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6b70 00:29:43.668 [2024-11-19 10:57:33.435628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.435646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.443015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1dab640) with pdu=0x2000166e3d08 00:29:43.668 [2024-11-19 10:57:33.443871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.443889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:43.668 [2024-11-19 10:57:33.451610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e5a90 00:29:43.668 [2024-11-19 10:57:33.452285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.668 [2024-11-19 10:57:33.452304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.460919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6b70 00:29:43.928 [2024-11-19 10:57:33.461571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.461592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.470936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6b70 00:29:43.928 [2024-11-19 10:57:33.472058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.472076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.479949] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fa3a0 00:29:43.928 [2024-11-19 10:57:33.480820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.480839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.489034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e01f8 00:29:43.928 [2024-11-19 10:57:33.490097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.490116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.498085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f81e0 00:29:43.928 [2024-11-19 10:57:33.499066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.499085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.508182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f81e0 00:29:43.928 [2024-11-19 10:57:33.509614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.509631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 
dnr:0 00:29:43.928 [2024-11-19 10:57:33.515296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f3e60 00:29:43.928 [2024-11-19 10:57:33.516257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.516275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.524664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e4de8 00:29:43.928 [2024-11-19 10:57:33.525797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.525814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.534073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fb8b8 00:29:43.928 [2024-11-19 10:57:33.535277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.535295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.542567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166edd58 00:29:43.928 [2024-11-19 10:57:33.543501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.543519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.552297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fc560 00:29:43.928 [2024-11-19 10:57:33.553495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.553512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.559419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fd640 00:29:43.928 [2024-11-19 10:57:33.560168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.560186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.571157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e7818 00:29:43.928 [2024-11-19 10:57:33.572708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.572727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.577647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166de8a8 00:29:43.928 [2024-11-19 10:57:33.578281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.578299] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.586712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e9168 00:29:43.928 [2024-11-19 10:57:33.587352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.587374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.597213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166eee38 00:29:43.928 [2024-11-19 10:57:33.597974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.597993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.606035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e1b48 00:29:43.928 [2024-11-19 10:57:33.606998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.607017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.616727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ecc78 00:29:43.928 [2024-11-19 10:57:33.618214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.928 [2024-11-19 10:57:33.618231] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:43.928 [2024-11-19 10:57:33.623034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e5ec8 00:29:43.928 [2024-11-19 10:57:33.623649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.929 [2024-11-19 10:57:33.623667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:43.929 [2024-11-19 10:57:33.632620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e27f0 00:29:43.929 [2024-11-19 10:57:33.633535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.929 [2024-11-19 10:57:33.633554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:43.929 [2024-11-19 10:57:33.642016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166eb760 00:29:43.929 [2024-11-19 10:57:33.643057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.929 [2024-11-19 10:57:33.643078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:43.929 [2024-11-19 10:57:33.651387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e9e10 00:29:43.929 [2024-11-19 10:57:33.652543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2106 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:43.929 [2024-11-19 10:57:33.652562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:43.929 [2024-11-19 10:57:33.660767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e38d0 00:29:43.929 [2024-11-19 10:57:33.662079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.929 [2024-11-19 10:57:33.662097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:43.929 [2024-11-19 10:57:33.669057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166eee38 00:29:43.929 [2024-11-19 10:57:33.670361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.929 [2024-11-19 10:57:33.670379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:43.929 [2024-11-19 10:57:33.676763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166eb760 00:29:43.929 [2024-11-19 10:57:33.677483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.929 [2024-11-19 10:57:33.677501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:43.929 [2024-11-19 10:57:33.686200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e0a68 00:29:43.929 [2024-11-19 10:57:33.687022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:11833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.929 [2024-11-19 10:57:33.687040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:43.929 [2024-11-19 10:57:33.698016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6738 00:29:43.929 [2024-11-19 10:57:33.699533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.929 [2024-11-19 10:57:33.699551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:43.929 [2024-11-19 10:57:33.704626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e9e10 00:29:43.929 [2024-11-19 10:57:33.705445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.929 [2024-11-19 10:57:33.705463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:43.929 [2024-11-19 10:57:33.714068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f6890 00:29:43.929 [2024-11-19 10:57:33.715065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.929 [2024-11-19 10:57:33.715084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.723424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e27f0 00:29:44.192 [2024-11-19 10:57:33.723949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.723968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.732145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166dfdc0 00:29:44.192 [2024-11-19 10:57:33.732941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.732960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.740728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f3e60 00:29:44.192 [2024-11-19 10:57:33.741455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.741473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.750106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6738 00:29:44.192 [2024-11-19 10:57:33.750959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.750978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.760222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166eee38 00:29:44.192 
[2024-11-19 10:57:33.761355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.761374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.769504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e0a68 00:29:44.192 [2024-11-19 10:57:33.770168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.770187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.777963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e27f0 00:29:44.192 [2024-11-19 10:57:33.779171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.779189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.785680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f6020 00:29:44.192 [2024-11-19 10:57:33.786297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.786314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.795080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1dab640) with pdu=0x2000166f2d80 00:29:44.192 [2024-11-19 10:57:33.795816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.795834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:44.192 28106.00 IOPS, 109.79 MiB/s [2024-11-19T09:57:33.984Z] [2024-11-19 10:57:33.804936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ea248 00:29:44.192 [2024-11-19 10:57:33.805790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.805809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.814348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e73e0 00:29:44.192 [2024-11-19 10:57:33.815251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.815269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.823742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166eb328 00:29:44.192 [2024-11-19 10:57:33.824799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.824821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:29:44.192 [2024-11-19 10:57:33.833653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f2510 00:29:44.192 [2024-11-19 10:57:33.835022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.835050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.842885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ea248 00:29:44.192 [2024-11-19 10:57:33.844268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.844301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.849591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166eee38 00:29:44.192 [2024-11-19 10:57:33.850374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.850394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.859166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f7970 00:29:44.192 [2024-11-19 10:57:33.860092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.860110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.868408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fa7d8 00:29:44.192 [2024-11-19 10:57:33.868849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.868868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.879120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f7538 00:29:44.192 [2024-11-19 10:57:33.880462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.880481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.888487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fd640 00:29:44.192 [2024-11-19 10:57:33.889949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.889966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.895034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fdeb0 00:29:44.192 [2024-11-19 10:57:33.895803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.895820] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.905777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ef270 00:29:44.192 [2024-11-19 10:57:33.906999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.907017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.914860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166feb58 00:29:44.192 [2024-11-19 10:57:33.916075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.916092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.922949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f57b0 00:29:44.192 [2024-11-19 10:57:33.923875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.923893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.931925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f6890 00:29:44.192 [2024-11-19 10:57:33.932864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.932883] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.940799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e73e0 00:29:44.192 [2024-11-19 10:57:33.941379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.192 [2024-11-19 10:57:33.941398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:44.192 [2024-11-19 10:57:33.950235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f6458 00:29:44.192 [2024-11-19 10:57:33.950909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.193 [2024-11-19 10:57:33.950928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:44.193 [2024-11-19 10:57:33.958686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166dece0 00:29:44.193 [2024-11-19 10:57:33.959895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.193 [2024-11-19 10:57:33.959913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:44.193 [2024-11-19 10:57:33.966437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f4b08 00:29:44.193 [2024-11-19 10:57:33.967097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:44.193 [2024-11-19 10:57:33.967115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:44.505 [2024-11-19 10:57:33.977516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e9168 00:29:44.505 [2024-11-19 10:57:33.978818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:33.978841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:33.987957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166eaef0 00:29:44.506 [2024-11-19 10:57:33.989229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:33.989250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:33.996641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ff3c8 00:29:44.506 [2024-11-19 10:57:33.997487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:33.997508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.005069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e0a68 00:29:44.506 [2024-11-19 10:57:34.005994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10222 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.006013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.014239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e84c0 00:29:44.506 [2024-11-19 10:57:34.014724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.014743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.025654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f9b30 00:29:44.506 [2024-11-19 10:57:34.027130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.027148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.031993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e1710 00:29:44.506 [2024-11-19 10:57:34.032596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.032614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.041371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fdeb0 00:29:44.506 [2024-11-19 10:57:34.042214] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.042233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.050134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166dfdc0 00:29:44.506 [2024-11-19 10:57:34.050616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.050634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.059471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f5378 00:29:44.506 [2024-11-19 10:57:34.060389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.060411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.068576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fcdd0 00:29:44.506 [2024-11-19 10:57:34.069050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.069068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.077543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e9168 00:29:44.506 [2024-11-19 10:57:34.078295] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.078314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.086576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6fa8 00:29:44.506 [2024-11-19 10:57:34.087091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.087110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.095751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fc998 00:29:44.506 [2024-11-19 10:57:34.096570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.096589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.104703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ec840 00:29:44.506 [2024-11-19 10:57:34.105165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.105183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.115997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e8d30 
00:29:44.506 [2024-11-19 10:57:34.117471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.117489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.122317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fdeb0 00:29:44.506 [2024-11-19 10:57:34.122968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.122986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.131895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e88f8 00:29:44.506 [2024-11-19 10:57:34.132784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.132803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.141275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fb048 00:29:44.506 [2024-11-19 10:57:34.142287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.142308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.151980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1dab640) with pdu=0x2000166eee38 00:29:44.506 [2024-11-19 10:57:34.153410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.153427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.160903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166dfdc0 00:29:44.506 [2024-11-19 10:57:34.162349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.162367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.167443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e5220 00:29:44.506 [2024-11-19 10:57:34.168144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.168162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.178419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e5220 00:29:44.506 [2024-11-19 10:57:34.179629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.179647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.186234] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fbcf0 00:29:44.506 [2024-11-19 10:57:34.186925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.186943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.195441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fdeb0 00:29:44.506 [2024-11-19 10:57:34.196135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.196153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.204627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f5378 00:29:44.506 [2024-11-19 10:57:34.205198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.506 [2024-11-19 10:57:34.205223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:44.506 [2024-11-19 10:57:34.213380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166dece0 00:29:44.507 [2024-11-19 10:57:34.214149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.507 [2024-11-19 10:57:34.214168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 
dnr:0 00:29:44.507 [2024-11-19 10:57:34.222466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fb048 00:29:44.507 [2024-11-19 10:57:34.223216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.507 [2024-11-19 10:57:34.223235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:44.507 [2024-11-19 10:57:34.231400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fa3a0 00:29:44.507 [2024-11-19 10:57:34.232256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.507 [2024-11-19 10:57:34.232274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:44.507 [2024-11-19 10:57:34.242125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f4298 00:29:44.507 [2024-11-19 10:57:34.243456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.507 [2024-11-19 10:57:34.243475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:44.507 [2024-11-19 10:57:34.248707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e8088 00:29:44.507 [2024-11-19 10:57:34.249304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.507 [2024-11-19 10:57:34.249326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:44.507 [2024-11-19 10:57:34.258349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e01f8 00:29:44.507 [2024-11-19 10:57:34.259140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.507 [2024-11-19 10:57:34.259161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:44.784 [2024-11-19 10:57:34.269564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e3498 00:29:44.784 [2024-11-19 10:57:34.270733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.784 [2024-11-19 10:57:34.270753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:44.784 [2024-11-19 10:57:34.277211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e84c0 00:29:44.784 [2024-11-19 10:57:34.277697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.784 [2024-11-19 10:57:34.277716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:44.784 [2024-11-19 10:57:34.287697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166dfdc0 00:29:44.784 [2024-11-19 10:57:34.288750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.784 [2024-11-19 10:57:34.288769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.784 [2024-11-19 10:57:34.295091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166eb328 00:29:44.784 [2024-11-19 10:57:34.295677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.784 [2024-11-19 10:57:34.295695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:44.784 [2024-11-19 10:57:34.306463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e23b8 00:29:44.784 [2024-11-19 10:57:34.307861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.784 [2024-11-19 10:57:34.307885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:44.784 [2024-11-19 10:57:34.313275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f9b30 00:29:44.784 [2024-11-19 10:57:34.313950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.784 [2024-11-19 10:57:34.313970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:44.784 [2024-11-19 10:57:34.324341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f0ff8 00:29:44.784 [2024-11-19 10:57:34.325509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.784 
[2024-11-19 10:57:34.325530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:44.784 [2024-11-19 10:57:34.333883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e7c50 00:29:44.784 [2024-11-19 10:57:34.335191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.784 [2024-11-19 10:57:34.335228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.784 [2024-11-19 10:57:34.342566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f1868 00:29:44.784 [2024-11-19 10:57:34.343519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.784 [2024-11-19 10:57:34.343538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.784 [2024-11-19 10:57:34.353293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166eb328 00:29:44.784 [2024-11-19 10:57:34.354773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.784 [2024-11-19 10:57:34.354791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.784 [2024-11-19 10:57:34.359854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e23b8 00:29:44.784 [2024-11-19 10:57:34.360609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.784 [2024-11-19 10:57:34.360628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.368879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6b70 00:29:44.785 [2024-11-19 10:57:34.369676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.369694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.378280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f57b0 00:29:44.785 [2024-11-19 10:57:34.379205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.379227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.387480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166eff18 00:29:44.785 [2024-11-19 10:57:34.388394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.388413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.396009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e73e0 00:29:44.785 [2024-11-19 10:57:34.396827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:126 nsid:1 lba:13347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.396846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.405239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ebb98 00:29:44.785 [2024-11-19 10:57:34.405712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.405731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.414196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e3d08 00:29:44.785 [2024-11-19 10:57:34.414941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.414960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.423297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f4b08 00:29:44.785 [2024-11-19 10:57:34.424005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.424023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.433877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f0ff8 00:29:44.785 [2024-11-19 10:57:34.435231] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.435249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.440290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166fb048 00:29:44.785 [2024-11-19 10:57:34.440951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.440970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.450851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e7818 00:29:44.785 [2024-11-19 10:57:34.451762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.451780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.461319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e5a90 00:29:44.785 [2024-11-19 10:57:34.462662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.462680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.470361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with 
pdu=0x2000166f2948 00:29:44.785 [2024-11-19 10:57:34.471691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.471710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.476948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e3498 00:29:44.785 [2024-11-19 10:57:34.477567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.477586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.488647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166eea00 00:29:44.785 [2024-11-19 10:57:34.489954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.489972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.495780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e1f80 00:29:44.785 [2024-11-19 10:57:34.496623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.496641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.504718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1dab640) with pdu=0x2000166de8a8 00:29:44.785 [2024-11-19 10:57:34.505342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.505361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.515746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166eea00 00:29:44.785 [2024-11-19 10:57:34.517166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.517185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.522211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e8088 00:29:44.785 [2024-11-19 10:57:34.522913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.522930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.532570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e4140 00:29:44.785 [2024-11-19 10:57:34.533430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.533449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.541169] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f7100 00:29:44.785 [2024-11-19 10:57:34.542018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.542036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.550781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e95a0 00:29:44.785 [2024-11-19 10:57:34.551627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.551645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:44.785 [2024-11-19 10:57:34.559761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e95a0 00:29:44.785 [2024-11-19 10:57:34.560752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.785 [2024-11-19 10:57:34.560775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:45.059 [2024-11-19 10:57:34.568938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e95a0 00:29:45.059 [2024-11-19 10:57:34.569915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.059 [2024-11-19 10:57:34.569937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
00:29:45.059 [2024-11-19 10:57:34.578012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e95a0 00:29:45.059 [2024-11-19 10:57:34.578989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.059 [2024-11-19 10:57:34.579013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:45.059 [2024-11-19 10:57:34.587220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e95a0 00:29:45.059 [2024-11-19 10:57:34.588176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.059 [2024-11-19 10:57:34.588199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:45.059 [2024-11-19 10:57:34.596712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e8088 00:29:45.059 [2024-11-19 10:57:34.597777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.059 [2024-11-19 10:57:34.597798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:45.059 [2024-11-19 10:57:34.605402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6738 00:29:45.059 [2024-11-19 10:57:34.606471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.059 [2024-11-19 10:57:34.606490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:81 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:45.059 [2024-11-19 10:57:34.614881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f31b8 00:29:45.059 [2024-11-19 10:57:34.616014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.059 [2024-11-19 10:57:34.616035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:45.059 [2024-11-19 10:57:34.623195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f3e60 00:29:45.059 [2024-11-19 10:57:34.624034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.059 [2024-11-19 10:57:34.624052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:45.059 [2024-11-19 10:57:34.632271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e1b48 00:29:45.060 [2024-11-19 10:57:34.632870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.632888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.641629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e38d0 00:29:45.060 [2024-11-19 10:57:34.642461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.642479] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.651944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ef270 00:29:45.060 [2024-11-19 10:57:34.653434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.653452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.658263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f1430 00:29:45.060 [2024-11-19 10:57:34.658944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.658962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.667409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ecc78 00:29:45.060 [2024-11-19 10:57:34.668121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.668140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.676378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166ef6a8 00:29:45.060 [2024-11-19 10:57:34.677088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.677107] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.685364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f6cc8 00:29:45.060 [2024-11-19 10:57:34.685986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.686005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.694281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166f6890 00:29:45.060 [2024-11-19 10:57:34.694887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.694905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.703151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6b70 00:29:45.060 [2024-11-19 10:57:34.703902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.703921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.712087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6b70 00:29:45.060 [2024-11-19 10:57:34.712803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:45.060 [2024-11-19 10:57:34.712821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.721012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6b70 00:29:45.060 [2024-11-19 10:57:34.721717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.721736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.730173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6b70 00:29:45.060 [2024-11-19 10:57:34.730889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.730909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.739156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6b70 00:29:45.060 [2024-11-19 10:57:34.739868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.739887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.748136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6b70 00:29:45.060 [2024-11-19 10:57:34.748819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10100 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.748837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.757064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6b70 00:29:45.060 [2024-11-19 10:57:34.757765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.757784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.765990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6b70 00:29:45.060 [2024-11-19 10:57:34.766705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.766723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.774902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6b70 00:29:45.060 [2024-11-19 10:57:34.775595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.775614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.783829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6b70 00:29:45.060 [2024-11-19 10:57:34.784516] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.784535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.792874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6b70 00:29:45.060 [2024-11-19 10:57:34.793593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.793612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:45.060 [2024-11-19 10:57:34.802091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dab640) with pdu=0x2000166e6b70 00:29:45.060 [2024-11-19 10:57:34.802809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.060 [2024-11-19 10:57:34.802828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:45.060 28145.50 IOPS, 109.94 MiB/s 00:29:45.060 Latency(us) 00:29:45.060 [2024-11-19T09:57:34.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.060 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:45.060 nvme0n1 : 2.00 28163.89 110.02 0.00 0.00 4539.16 1778.83 12420.63 00:29:45.060 [2024-11-19T09:57:34.852Z] =================================================================================================================== 00:29:45.060 [2024-11-19T09:57:34.852Z] Total : 28163.89 110.02 0.00 0.00 4539.16 1778.83 12420.63 00:29:45.060 { 00:29:45.060 "results": [ 00:29:45.060 { 00:29:45.060 "job": "nvme0n1", 00:29:45.060 
"core_mask": "0x2", 00:29:45.060 "workload": "randwrite", 00:29:45.060 "status": "finished", 00:29:45.060 "queue_depth": 128, 00:29:45.060 "io_size": 4096, 00:29:45.060 "runtime": 2.004908, 00:29:45.060 "iops": 28163.885824187444, 00:29:45.060 "mibps": 110.0151790007322, 00:29:45.060 "io_failed": 0, 00:29:45.060 "io_timeout": 0, 00:29:45.060 "avg_latency_us": 4539.159409876655, 00:29:45.060 "min_latency_us": 1778.8342857142857, 00:29:45.060 "max_latency_us": 12420.63238095238 00:29:45.060 } 00:29:45.060 ], 00:29:45.060 "core_count": 1 00:29:45.060 } 00:29:45.060 10:57:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:45.060 10:57:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:45.060 10:57:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:45.060 | .driver_specific 00:29:45.060 | .nvme_error 00:29:45.060 | .status_code 00:29:45.060 | .command_transient_transport_error' 00:29:45.060 10:57:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:45.320 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 221 > 0 )) 00:29:45.320 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4080011 00:29:45.320 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4080011 ']' 00:29:45.320 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4080011 00:29:45.320 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:45.320 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = 
Linux ']' 00:29:45.320 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4080011 00:29:45.320 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:45.320 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:45.320 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4080011' 00:29:45.320 killing process with pid 4080011 00:29:45.320 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4080011 00:29:45.320 Received shutdown signal, test time was about 2.000000 seconds 00:29:45.320 00:29:45.320 Latency(us) 00:29:45.320 [2024-11-19T09:57:35.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.320 [2024-11-19T09:57:35.112Z] =================================================================================================================== 00:29:45.320 [2024-11-19T09:57:35.112Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:45.320 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4080011 00:29:45.579 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:45.579 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:45.579 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:45.579 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:45.579 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:45.579 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4080488 
00:29:45.579 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4080488 /var/tmp/bperf.sock 00:29:45.579 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4080488 ']' 00:29:45.579 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:45.579 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:45.579 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:45.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:45.579 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:45.579 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:45.579 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:45.579 [2024-11-19 10:57:35.280027] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:29:45.579 [2024-11-19 10:57:35.280077] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4080488 ] 00:29:45.579 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:45.579 Zero copy mechanism will not be used. 
00:29:45.579 [2024-11-19 10:57:35.353440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.837 [2024-11-19 10:57:35.391162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.837 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:45.837 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:45.837 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:45.837 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:46.096 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:46.096 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.096 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:46.096 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.096 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:46.096 10:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:46.354 nvme0n1 00:29:46.355 10:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:46.355 10:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.355 10:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:46.355 10:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.355 10:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:46.355 10:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:46.355 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:46.355 Zero copy mechanism will not be used. 00:29:46.355 Running I/O for 2 seconds... 00:29:46.355 [2024-11-19 10:57:36.131478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.355 [2024-11-19 10:57:36.131579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.355 [2024-11-19 10:57:36.131610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.355 [2024-11-19 10:57:36.136125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.355 [2024-11-19 10:57:36.136194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.355 [2024-11-19 10:57:36.136223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.355 [2024-11-19 
10:57:36.140519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.355 [2024-11-19 10:57:36.140591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.355 [2024-11-19 10:57:36.140615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.144979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.145055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.145081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.149302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.149374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.149395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.153740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.153812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.153831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.158190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.158282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.158301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.162742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.162797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.162816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.166969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.167023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.167041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.171390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.171446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.171464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.175968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.176041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.176060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.180643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.180703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.180721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.185278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.185338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.185356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.189766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.189837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.189856] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.194506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.194585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.194604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.199225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.199290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.199308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.203909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.203976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.203995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.208726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.208785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:46.615 [2024-11-19 10:57:36.208803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.213190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.213275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.213293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.217706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.217776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.217795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.222245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.222313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.222332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.226913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.226977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.226996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.231405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.231457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.231475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.235867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.235930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.235947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.240598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.615 [2024-11-19 10:57:36.240650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.615 [2024-11-19 10:57:36.240669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.615 [2024-11-19 10:57:36.245217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.245320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.245338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.249806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.249900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.249918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.254270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.254341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.254360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.258528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.258597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.258615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.262802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 
00:29:46.616 [2024-11-19 10:57:36.262865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.262889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.267041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.267096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.267114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.271261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.271337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.271355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.275500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.275573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.275592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.279743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.279818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.279836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.283985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.284065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.284083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.288333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.288394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.288412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.292616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.292704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.292723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.296864] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.296918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.296936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.301128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.301210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.301230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.305389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.305456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.305475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.309595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.309665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.309683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:29:46.616 [2024-11-19 10:57:36.313817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.313907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.313925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.318017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.318089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.318107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.322674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.322757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.322776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.327166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.327242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.327261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.331702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.331756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.331774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.336392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.336486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.336504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.342169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.342245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.342265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.347449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.347543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.347562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.353917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.354065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.354083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.360793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.360885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.360904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.366790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.616 [2024-11-19 10:57:36.366884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.616 [2024-11-19 10:57:36.366902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.616 [2024-11-19 10:57:36.372852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.617 [2024-11-19 10:57:36.372935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.617 
[2024-11-19 10:57:36.372954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.617 [2024-11-19 10:57:36.378198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.617 [2024-11-19 10:57:36.378308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.617 [2024-11-19 10:57:36.378325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.617 [2024-11-19 10:57:36.382723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.617 [2024-11-19 10:57:36.382841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.617 [2024-11-19 10:57:36.382859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.617 [2024-11-19 10:57:36.387597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.617 [2024-11-19 10:57:36.387675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.617 [2024-11-19 10:57:36.387697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.617 [2024-11-19 10:57:36.392273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.617 [2024-11-19 10:57:36.392328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.617 [2024-11-19 10:57:36.392346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.617 [2024-11-19 10:57:36.396922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.617 [2024-11-19 10:57:36.397022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.617 [2024-11-19 10:57:36.397039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.617 [2024-11-19 10:57:36.401524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.617 [2024-11-19 10:57:36.401649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.617 [2024-11-19 10:57:36.401669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.877 [2024-11-19 10:57:36.406213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.406289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.406309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.410937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.411012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.411032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.415451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.415578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.415597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.420005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.420065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.420083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.424638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.424715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.424733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.429674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.429783] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.429801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.435159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.435304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.435322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.439891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.439957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.439975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.444580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.444652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.444670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.450028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with 
pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.450128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.450146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.455769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.455878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.455896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.460994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.461164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.461182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.466051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.466161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.466179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.471130] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.471244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.471262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.475994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.476094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.476113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.480914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.481018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.481036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.486076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.486152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.486171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 
10:57:36.490956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.491041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.491059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.495766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.495866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.495884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.500214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.500307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.500325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.504372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.504445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.504463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.508532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.508627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.508645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.513179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.513257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.513279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.517642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.517745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.517763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.523282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.523361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.523379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.529146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.529281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.878 [2024-11-19 10:57:36.529300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.878 [2024-11-19 10:57:36.534216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.878 [2024-11-19 10:57:36.534370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.534388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.538948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.539044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.539062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.544275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.544413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.544430] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.549458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.549618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.549638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.554442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.554538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.554556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.559455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.559555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.559573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.564598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.564700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:46.879 [2024-11-19 10:57:36.564718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.569596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.569701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.569719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.574667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.574828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.574845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.579920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.579984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.580002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.584662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.584713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.584731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.589033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.589089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.589107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.593334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.593411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.593429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.597718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.597784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.597802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.602027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.602100] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.602118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.606359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.606428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.606456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.610619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.610683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.610700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.614886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.614946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.614963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.619122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 
00:29:46.879 [2024-11-19 10:57:36.619184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.619207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.623637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.623759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.623777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.628419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.628470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.628488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.633488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.633564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.633583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.638598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.638652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.638675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.643872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.643961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.643980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.649045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.649150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.649168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.654627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.654700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.654718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.659730] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.659799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.879 [2024-11-19 10:57:36.659818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.879 [2024-11-19 10:57:36.664806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:46.879 [2024-11-19 10:57:36.664864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.880 [2024-11-19 10:57:36.664884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.669905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.669994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.670015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.675403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.675456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.675476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:29:47.140 [2024-11-19 10:57:36.680385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.680439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.680457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.685578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.685654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.685673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.690813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.690898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.690916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.695663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.695735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.695754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.700731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.700837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.700855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.705456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.705567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.705585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.710772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.710824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.710843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.716043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.716117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.716136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.721088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.721155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.721174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.725887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.725980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.725999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.730976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.731115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.731132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.736091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.736158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:47.140 [2024-11-19 10:57:36.736178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.741370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.741444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.741463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.746152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.746296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.746314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.750736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.750812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.750830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.755752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.756016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.756035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.760565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.760800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.760820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.765071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.765361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.765381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.769641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.140 [2024-11-19 10:57:36.769909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.140 [2024-11-19 10:57:36.769933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.140 [2024-11-19 10:57:36.774410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.774678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.774697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.779093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.779357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.779375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.784079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.784346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.784365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.789803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.790085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.790104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.794661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 
00:29:47.141 [2024-11-19 10:57:36.794919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.794938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.799461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.799698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.799717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.804291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.804537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.804555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.809215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.809469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.809488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.814730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.814999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.815018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.819780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.820029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.820047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.824785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.825061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.825080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.829694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.829942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.829960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.834598] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.834848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.834866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.839584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.839847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.839866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.845219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.845490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.845509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.849972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.850213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.850232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:47.141 [2024-11-19 10:57:36.855799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.856087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.856106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.861845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.862085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.862104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.866719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.866991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.867011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.871324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.871581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.871599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.875680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.875926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.875944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.879989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.880276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.880294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.884336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.884602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.884621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.888635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.888896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.888915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.892883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.893136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.893155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.897142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.897410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.897433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.901580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.141 [2024-11-19 10:57:36.901846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.141 [2024-11-19 10:57:36.901866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.141 [2024-11-19 10:57:36.906147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.142 [2024-11-19 10:57:36.906432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:47.142 [2024-11-19 10:57:36.906451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.142 [2024-11-19 10:57:36.911396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.142 [2024-11-19 10:57:36.911666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.142 [2024-11-19 10:57:36.911685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.142 [2024-11-19 10:57:36.916631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.142 [2024-11-19 10:57:36.916896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.142 [2024-11-19 10:57:36.916915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.142 [2024-11-19 10:57:36.921367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.142 [2024-11-19 10:57:36.921623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.142 [2024-11-19 10:57:36.921641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.142 [2024-11-19 10:57:36.925926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.142 [2024-11-19 10:57:36.926193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.142 [2024-11-19 10:57:36.926221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.402 [2024-11-19 10:57:36.930622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.402 [2024-11-19 10:57:36.930869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.402 [2024-11-19 10:57:36.930889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.402 [2024-11-19 10:57:36.935094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.402 [2024-11-19 10:57:36.935361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.402 [2024-11-19 10:57:36.935382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.402 [2024-11-19 10:57:36.939505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.402 [2024-11-19 10:57:36.939760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.402 [2024-11-19 10:57:36.939779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.402 [2024-11-19 10:57:36.943816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.402 [2024-11-19 10:57:36.944065] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.402 [2024-11-19 10:57:36.944084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.402 [2024-11-19 10:57:36.948067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.402 [2024-11-19 10:57:36.948333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.402 [2024-11-19 10:57:36.948352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.402 [2024-11-19 10:57:36.952315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.402 [2024-11-19 10:57:36.952569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:36.952588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:36.956498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:36.956738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:36.956757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:36.960726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:36.960989] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:36.961007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:36.964994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:36.965268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:36.965287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:36.969257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:36.969510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:36.969529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:36.973493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:36.973704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:36.973723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:36.978018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with 
pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:36.978266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:36.978285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:36.982387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:36.982635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:36.982654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:36.986893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:36.987152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:36.987171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:36.991359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:36.991633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:36.991651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:36.995736] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:36.995998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:36.996017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:37.000136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:37.000408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:37.000428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:37.004590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:37.004852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:37.004871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:37.009075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:37.009361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:37.009380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 
10:57:37.013555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:37.013807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:37.013829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:37.018032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:37.018274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:37.018293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:37.022468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:37.022725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:37.022744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:37.026933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:37.027177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:37.027197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:37.031580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:37.031827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:37.031846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:37.036135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:37.036393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:37.036412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:37.040703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:37.040952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:37.040971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.403 [2024-11-19 10:57:37.045125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.403 [2024-11-19 10:57:37.045368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.403 [2024-11-19 10:57:37.045399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:47.403 [2024-11-19 10:57:37.049748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8
00:29:47.403 [2024-11-19 10:57:37.049991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:47.403 [2024-11-19 10:57:37.050010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same data_crc32_calc_done *ERROR* / WRITE *NOTICE* / TRANSIENT TRANSPORT ERROR completion triplet repeats on qid:1 for further LBAs (20992, 2048, 6336, 24448, 6144, 14496, ...) from 10:57:37.054 through 10:57:37.130 ...]
00:29:47.404 6588.00 IOPS, 823.50 MiB/s [2024-11-19T09:57:37.196Z]
[... the triplet pattern continues uninterrupted from 10:57:37.135 through 10:57:37.420, cycling sqhd through 0002/0022/0042/0062 ...]
00:29:47.666 [2024-11-19 10:57:37.425652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8
00:29:47.666 [2024-11-19 10:57:37.425842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.666 [2024-11-19 10:57:37.425868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.666 [2024-11-19 10:57:37.430475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.666 [2024-11-19 10:57:37.430639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.666 [2024-11-19 10:57:37.430657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.666 [2024-11-19 10:57:37.434425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.666 [2024-11-19 10:57:37.434588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.666 [2024-11-19 10:57:37.434605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.666 [2024-11-19 10:57:37.438250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.666 [2024-11-19 10:57:37.438420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.666 [2024-11-19 10:57:37.438438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.666 [2024-11-19 10:57:37.442516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 
00:29:47.666 [2024-11-19 10:57:37.442683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.666 [2024-11-19 10:57:37.442700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.667 [2024-11-19 10:57:37.447741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.667 [2024-11-19 10:57:37.447898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.667 [2024-11-19 10:57:37.447916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.454039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.454222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.454247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.458507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.458690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.458709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.462676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.462856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.462874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.467052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.467232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.467250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.471212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.471393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.471412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.475279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.475483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.475502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.479316] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.479574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.479593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.483142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.483351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.483371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.487210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.487395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.487414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.491023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.491181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.491199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:47.927 [2024-11-19 10:57:37.495502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.495718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.495738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.500788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.501033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.501052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.505337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.505511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.505529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.509541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.509721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.509738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.513770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.513924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.513942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.518034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.518211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.518229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.522052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.522266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.522286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.526080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.927 [2024-11-19 10:57:37.526282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.927 [2024-11-19 10:57:37.526300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.927 [2024-11-19 10:57:37.530354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.530567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.530586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.534705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.534852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.534870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.538981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.539215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.539234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.543509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.543699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:47.928 [2024-11-19 10:57:37.543722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.548153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.548345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.548368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.552208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.552365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.552382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.556086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.556235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.556253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.560610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.560861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.560880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.566277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.566539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.566562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.571481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.571677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.571695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.577447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.577588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.577606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.582620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.582747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.582765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.588035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.588303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.588323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.593789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.593951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.593970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.598438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.598605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.598623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.602664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 
00:29:47.928 [2024-11-19 10:57:37.602826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.602843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.606835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.606988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.607005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.611179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.611365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.611399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.615261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.615395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.615413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.619181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.619309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.619328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.623239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.623392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.623410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.628418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.628626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.628646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.633480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.633610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.633628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.637608] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.637746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.637764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.641813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.642007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.642025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.646146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.646274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.646292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.928 [2024-11-19 10:57:37.650503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.928 [2024-11-19 10:57:37.650610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.928 [2024-11-19 10:57:37.650629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:29:47.928 [2024-11-19 10:57:37.654573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.929 [2024-11-19 10:57:37.654682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.929 [2024-11-19 10:57:37.654700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.929 [2024-11-19 10:57:37.658683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.929 [2024-11-19 10:57:37.658833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.929 [2024-11-19 10:57:37.658850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.929 [2024-11-19 10:57:37.662761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.929 [2024-11-19 10:57:37.662864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.929 [2024-11-19 10:57:37.662881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.929 [2024-11-19 10:57:37.666924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.929 [2024-11-19 10:57:37.667102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.929 [2024-11-19 10:57:37.667120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.929 [2024-11-19 10:57:37.671705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.929 [2024-11-19 10:57:37.671826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.929 [2024-11-19 10:57:37.671844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.929 [2024-11-19 10:57:37.675976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.929 [2024-11-19 10:57:37.676115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.929 [2024-11-19 10:57:37.676132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.929 [2024-11-19 10:57:37.680256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.929 [2024-11-19 10:57:37.680371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.929 [2024-11-19 10:57:37.680389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.929 [2024-11-19 10:57:37.684401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.929 [2024-11-19 10:57:37.684548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.929 [2024-11-19 10:57:37.684571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.929 [2024-11-19 10:57:37.688508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.929 [2024-11-19 10:57:37.688660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.929 [2024-11-19 10:57:37.688679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.929 [2024-11-19 10:57:37.692651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.929 [2024-11-19 10:57:37.692791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.929 [2024-11-19 10:57:37.692809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.929 [2024-11-19 10:57:37.696687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.929 [2024-11-19 10:57:37.696808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.929 [2024-11-19 10:57:37.696826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.929 [2024-11-19 10:57:37.700680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.929 [2024-11-19 10:57:37.700812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:47.929 [2024-11-19 10:57:37.700829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.929 [2024-11-19 10:57:37.704785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.929 [2024-11-19 10:57:37.704885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.929 [2024-11-19 10:57:37.704902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.929 [2024-11-19 10:57:37.708976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.929 [2024-11-19 10:57:37.709125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.929 [2024-11-19 10:57:37.709143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.929 [2024-11-19 10:57:37.713275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:47.929 [2024-11-19 10:57:37.713425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.929 [2024-11-19 10:57:37.713445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.189 [2024-11-19 10:57:37.717506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.189 [2024-11-19 10:57:37.717654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.189 [2024-11-19 10:57:37.717674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.189 [2024-11-19 10:57:37.721677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.189 [2024-11-19 10:57:37.721802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.189 [2024-11-19 10:57:37.721822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.189 [2024-11-19 10:57:37.725790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.189 [2024-11-19 10:57:37.725941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.189 [2024-11-19 10:57:37.725960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.189 [2024-11-19 10:57:37.729933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.189 [2024-11-19 10:57:37.730116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.189 [2024-11-19 10:57:37.730134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.189 [2024-11-19 10:57:37.734553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.189 [2024-11-19 10:57:37.734694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.189 [2024-11-19 10:57:37.734713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.189 [2024-11-19 10:57:37.739341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.189 [2024-11-19 10:57:37.739491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.189 [2024-11-19 10:57:37.739509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.189 [2024-11-19 10:57:37.743773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.189 [2024-11-19 10:57:37.743908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.189 [2024-11-19 10:57:37.743927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.189 [2024-11-19 10:57:37.748306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.189 [2024-11-19 10:57:37.748511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.189 [2024-11-19 10:57:37.748531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.189 [2024-11-19 10:57:37.753430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 
00:29:48.189 [2024-11-19 10:57:37.753687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.753707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.757742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.757904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.757922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.761879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.762045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.762063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.766054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.766228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.766248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.770151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.770316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.770334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.774269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.774448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.774466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.778492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.778695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.778714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.782544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.782742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.782762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.786702] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.786877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.786894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.790819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.791011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.791031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.795296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.795487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.795508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.800923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.801074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.801093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:29:48.190 [2024-11-19 10:57:37.805999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.806178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.806197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.811064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.811237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.811256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.816646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.816826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.816844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.822760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.822953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.822972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.828239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.828433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.828452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.833575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.833740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.833758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.839706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.839916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.839935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.845755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.845935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.845957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.850640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.850789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.850807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.854590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.854744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.854761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.858516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.858677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.858695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.862405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.862590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:48.190 [2024-11-19 10:57:37.862608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.866328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.866486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.866505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.870169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.870363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.870391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.874034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.874198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.190 [2024-11-19 10:57:37.874222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.190 [2024-11-19 10:57:37.877870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.190 [2024-11-19 10:57:37.878033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.878051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.881718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.881878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.881896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.885608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.885767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.885786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.889437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.889609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.889627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.893328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.893500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.893518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.897173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.897350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.897369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.901032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.901200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.901225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.904902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.905071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.905090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.908759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 
00:29:48.191 [2024-11-19 10:57:37.908917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.908936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.912619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.912792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.912814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.916452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.916613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.916632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.920297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.920468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.920486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.924104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.924277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.924295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.928343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.928503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.928522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.932790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.932953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.932972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.937051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.937232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.937250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.941554] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.941715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.941733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.946181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.946354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.946372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.950511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.950673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.950694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.954965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.955127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.955145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:29:48.191 [2024-11-19 10:57:37.959499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.959669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.959687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.964114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.964288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.964306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.968392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.968574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.968592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.972737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.972920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.972940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.191 [2024-11-19 10:57:37.977145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.191 [2024-11-19 10:57:37.977321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.191 [2024-11-19 10:57:37.977341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.451 [2024-11-19 10:57:37.981982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.451 [2024-11-19 10:57:37.982160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.451 [2024-11-19 10:57:37.982180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.451 [2024-11-19 10:57:37.986760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.451 [2024-11-19 10:57:37.986908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.451 [2024-11-19 10:57:37.986928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.451 [2024-11-19 10:57:37.991280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.451 [2024-11-19 10:57:37.991462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.451 [2024-11-19 10:57:37.991482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.451 [2024-11-19 10:57:37.995629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.451 [2024-11-19 10:57:37.995797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.451 [2024-11-19 10:57:37.995816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.451 [2024-11-19 10:57:38.000004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.451 [2024-11-19 10:57:38.000170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.451 [2024-11-19 10:57:38.000188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.451 [2024-11-19 10:57:38.004631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.451 [2024-11-19 10:57:38.004793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.451 [2024-11-19 10:57:38.004811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.451 [2024-11-19 10:57:38.009494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.451 [2024-11-19 10:57:38.009652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:48.451 [2024-11-19 10:57:38.009670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.451 [2024-11-19 10:57:38.013715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.451 [2024-11-19 10:57:38.013876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.451 [2024-11-19 10:57:38.013894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.451 [2024-11-19 10:57:38.018035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.451 [2024-11-19 10:57:38.018191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.451 [2024-11-19 10:57:38.018215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.451 [2024-11-19 10:57:38.022432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.451 [2024-11-19 10:57:38.022597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.451 [2024-11-19 10:57:38.022615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.451 [2024-11-19 10:57:38.027085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.451 [2024-11-19 10:57:38.027239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.451 [2024-11-19 10:57:38.027257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.451 [2024-11-19 10:57:38.031535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.451 [2024-11-19 10:57:38.031701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.451 [2024-11-19 10:57:38.031720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.035549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.035719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.035738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.039460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.039647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.039666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.043481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.043649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.043667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.047466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.047630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.047648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.051390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.051543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.051561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.055428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.055605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.055623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.059405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 
00:29:48.452 [2024-11-19 10:57:38.059566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.059583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.063425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.063575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.063597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.067318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.067485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.067503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.071461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.071618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.071636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.075427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.075591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.075609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.079364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.079518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.079536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.083288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.083474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.083492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.087316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.087497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.087515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.091308] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.091495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.091513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.095310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.095490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.095508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.099158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.099338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.099356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.103751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.103916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.103934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:48.452 [2024-11-19 10:57:38.108100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.108271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.108289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.112151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.112325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.112345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.115937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.116090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.116108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.119888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.120036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.120055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.123710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.123872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.123890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.127491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.127646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.127664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.452 [2024-11-19 10:57:38.131905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.132135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 10:57:38.132154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.452 6771.00 IOPS, 846.38 MiB/s [2024-11-19T09:57:38.244Z] [2024-11-19 10:57:38.136787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dabb20) with pdu=0x2000166ff3c8 00:29:48.452 [2024-11-19 10:57:38.136909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-19 
10:57:38.136927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.452 00:29:48.452 Latency(us) 00:29:48.452 [2024-11-19T09:57:38.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.452 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:48.453 nvme0n1 : 2.00 6769.25 846.16 0.00 0.00 2359.58 1341.93 7302.58 00:29:48.453 [2024-11-19T09:57:38.245Z] =================================================================================================================== 00:29:48.453 [2024-11-19T09:57:38.245Z] Total : 6769.25 846.16 0.00 0.00 2359.58 1341.93 7302.58 00:29:48.453 { 00:29:48.453 "results": [ 00:29:48.453 { 00:29:48.453 "job": "nvme0n1", 00:29:48.453 "core_mask": "0x2", 00:29:48.453 "workload": "randwrite", 00:29:48.453 "status": "finished", 00:29:48.453 "queue_depth": 16, 00:29:48.453 "io_size": 131072, 00:29:48.453 "runtime": 2.002733, 00:29:48.453 "iops": 6769.2498201208045, 00:29:48.453 "mibps": 846.1562275151006, 00:29:48.453 "io_failed": 0, 00:29:48.453 "io_timeout": 0, 00:29:48.453 "avg_latency_us": 2359.5801189334625, 00:29:48.453 "min_latency_us": 1341.9276190476191, 00:29:48.453 "max_latency_us": 7302.582857142857 00:29:48.453 } 00:29:48.453 ], 00:29:48.453 "core_count": 1 00:29:48.453 } 00:29:48.453 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:48.453 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:48.453 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:48.453 | .driver_specific 00:29:48.453 | .nvme_error 00:29:48.453 | .status_code 00:29:48.453 | .command_transient_transport_error' 00:29:48.453 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:48.711 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 438 > 0 )) 00:29:48.711 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4080488 00:29:48.711 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4080488 ']' 00:29:48.711 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4080488 00:29:48.711 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:48.711 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.711 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4080488 00:29:48.711 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:48.711 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:48.711 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4080488' 00:29:48.711 killing process with pid 4080488 00:29:48.711 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4080488 00:29:48.711 Received shutdown signal, test time was about 2.000000 seconds 00:29:48.711 00:29:48.711 Latency(us) 00:29:48.711 [2024-11-19T09:57:38.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.711 [2024-11-19T09:57:38.504Z] =================================================================================================================== 00:29:48.712 [2024-11-19T09:57:38.504Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:48.712 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4080488 00:29:48.970 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 4078829 00:29:48.970 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4078829 ']' 00:29:48.970 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4078829 00:29:48.970 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:48.970 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.970 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4078829 00:29:48.970 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:48.970 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:48.970 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4078829' 00:29:48.970 killing process with pid 4078829 00:29:48.970 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4078829 00:29:48.970 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4078829 00:29:49.229 00:29:49.229 real 0m13.847s 00:29:49.229 user 0m26.396s 00:29:49.229 sys 0m4.630s 00:29:49.229 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:49.229 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:49.229 
************************************ 00:29:49.229 END TEST nvmf_digest_error 00:29:49.230 ************************************ 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:49.230 rmmod nvme_tcp 00:29:49.230 rmmod nvme_fabrics 00:29:49.230 rmmod nvme_keyring 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 4078829 ']' 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 4078829 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 4078829 ']' 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 4078829 00:29:49.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4078829) - No such process 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 4078829 is not found' 00:29:49.230 Process with pid 4078829 is 
not found 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.230 10:57:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.766 10:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:51.766 00:29:51.766 real 0m36.877s 00:29:51.766 user 0m55.598s 00:29:51.766 sys 0m13.831s 00:29:51.766 10:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:51.766 10:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:51.766 ************************************ 00:29:51.766 END TEST nvmf_digest 00:29:51.766 ************************************ 00:29:51.766 10:57:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:51.766 10:57:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:51.766 10:57:40 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:51.766 10:57:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:51.766 10:57:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:51.766 10:57:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:51.766 10:57:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.766 ************************************ 00:29:51.766 START TEST nvmf_bdevperf 00:29:51.766 ************************************ 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:51.766 * Looking for test storage... 00:29:51.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@337 -- # IFS=.-: 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:51.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.766 --rc genhtml_branch_coverage=1 00:29:51.766 --rc genhtml_function_coverage=1 00:29:51.766 --rc genhtml_legend=1 00:29:51.766 --rc geninfo_all_blocks=1 00:29:51.766 --rc geninfo_unexecuted_blocks=1 00:29:51.766 00:29:51.766 ' 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:51.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.766 --rc genhtml_branch_coverage=1 00:29:51.766 --rc genhtml_function_coverage=1 00:29:51.766 --rc genhtml_legend=1 00:29:51.766 --rc geninfo_all_blocks=1 00:29:51.766 --rc geninfo_unexecuted_blocks=1 00:29:51.766 00:29:51.766 ' 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:51.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.766 --rc genhtml_branch_coverage=1 00:29:51.766 --rc genhtml_function_coverage=1 00:29:51.766 --rc genhtml_legend=1 00:29:51.766 --rc geninfo_all_blocks=1 00:29:51.766 --rc geninfo_unexecuted_blocks=1 00:29:51.766 00:29:51.766 ' 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:51.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.766 --rc genhtml_branch_coverage=1 00:29:51.766 --rc genhtml_function_coverage=1 00:29:51.766 --rc genhtml_legend=1 00:29:51.766 --rc geninfo_all_blocks=1 
00:29:51.766 --rc geninfo_unexecuted_blocks=1 00:29:51.766 00:29:51.766 ' 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:51.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- host/bdevperf.sh@24 -- # nvmftestinit 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:51.766 10:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:58.336 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:58.336 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:58.336 Found net devices under 0000:86:00.0: cvl_0_0 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:58.336 Found net devices under 0000:86:00.1: cvl_0_1 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:58.336 10:57:46 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:58.336 10:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:58.336 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:58.336 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:58.336 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:58.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:58.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:29:58.337 00:29:58.337 --- 10.0.0.2 ping statistics --- 00:29:58.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.337 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:58.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:58.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:29:58.337 00:29:58.337 --- 10.0.0.1 ping statistics --- 00:29:58.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.337 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:58.337 10:57:47 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=4084526 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 4084526 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 4084526 ']' 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.337 [2024-11-19 10:57:47.270901] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:29:58.337 [2024-11-19 10:57:47.270948] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.337 [2024-11-19 10:57:47.334240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:58.337 [2024-11-19 10:57:47.377773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:58.337 [2024-11-19 10:57:47.377808] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:58.337 [2024-11-19 10:57:47.377816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:58.337 [2024-11-19 10:57:47.377822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:58.337 [2024-11-19 10:57:47.377828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
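The `nvmf_tcp_init` records above (common.sh@250-284) split the two ice ports into a target/initiator pair: `cvl_0_0` is moved into a fresh `cvl_0_0_ns_spdk` namespace as 10.0.0.2 while `cvl_0_1` stays in the root namespace as 10.0.0.1, which is why the two pings cross the namespace boundary. A condensed dry-run sketch of that sequence follows; the default `echo` runner only prints the commands (the real ones need root and the ice interfaces), and the wrapper function name is invented for illustration:

```shell
# Dry-run sketch of the netns topology built in the trace above.
setup_tcp_test_net() {
  local run=${1-echo}                      # "echo" = dry run; pass "" to execute
  local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
  $run ip netns add "$ns"                  # target-side namespace
  $run ip link set "$tgt" netns "$ns"      # move target port into it
  $run ip addr add 10.0.0.1/24 dev "$ini"  # initiator side, root namespace
  $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
  $run ip link set "$ini" up
  $run ip netns exec "$ns" ip link set "$tgt" up
  $run ip netns exec "$ns" ip link set lo up
}
```

After this, `ping -c 1 10.0.0.2` from the root namespace and `ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1` correspond to the two ping records in the log.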
00:29:58.337 [2024-11-19 10:57:47.379110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:58.337 [2024-11-19 10:57:47.379227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.337 [2024-11-19 10:57:47.379228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.337 [2024-11-19 10:57:47.526299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.337 Malloc0 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.337 [2024-11-19 10:57:47.588485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:58.337 
10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:58.337 { 00:29:58.337 "params": { 00:29:58.337 "name": "Nvme$subsystem", 00:29:58.337 "trtype": "$TEST_TRANSPORT", 00:29:58.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.337 "adrfam": "ipv4", 00:29:58.337 "trsvcid": "$NVMF_PORT", 00:29:58.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.337 "hdgst": ${hdgst:-false}, 00:29:58.337 "ddgst": ${ddgst:-false} 00:29:58.337 }, 00:29:58.337 "method": "bdev_nvme_attach_controller" 00:29:58.337 } 00:29:58.337 EOF 00:29:58.337 )") 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:58.337 10:57:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:58.337 "params": { 00:29:58.337 "name": "Nvme1", 00:29:58.337 "trtype": "tcp", 00:29:58.337 "traddr": "10.0.0.2", 00:29:58.337 "adrfam": "ipv4", 00:29:58.337 "trsvcid": "4420", 00:29:58.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:58.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:58.337 "hdgst": false, 00:29:58.337 "ddgst": false 00:29:58.337 }, 00:29:58.337 "method": "bdev_nvme_attach_controller" 00:29:58.337 }' 00:29:58.337 [2024-11-19 10:57:47.640566] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
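The `gen_nvmf_target_json` heredoc expanded above emits one `bdev_nvme_attach_controller` stanza per subsystem and feeds it to bdevperf over `/dev/fd/62`, so no config file touches disk. A standalone approximation of that generator (a single `printf` in place of the `cat`/`jq` pipeline; field values mirror the expanded JSON printed in the trace):

```shell
# Approximation of gen_nvmf_target_json as expanded in the trace:
# one attach-controller record keyed by subsystem id.
gen_target_json() {
  local subsystem=${1:-1} traddr=${2:-10.0.0.2} trsvcid=${3:-4420}
  printf '{"params": {"name": "Nvme%s", "trtype": "tcp", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}\n' \
    "$subsystem" "$traddr" "$trsvcid" "$subsystem" "$subsystem"
}
# Usage mirroring the trace (process substitution instead of /dev/fd/62):
#   bdevperf --json <(gen_target_json 1) -q 128 -o 4096 -w verify -t 1
```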
00:29:58.337 [2024-11-19 10:57:47.640617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4084739 ] 00:29:58.337 [2024-11-19 10:57:47.717387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.337 [2024-11-19 10:57:47.758320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.337 Running I/O for 1 seconds... 00:29:59.270 11372.00 IOPS, 44.42 MiB/s 00:29:59.270 Latency(us) 00:29:59.270 [2024-11-19T09:57:49.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.270 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:59.270 Verification LBA range: start 0x0 length 0x4000 00:29:59.270 Nvme1n1 : 1.01 11370.14 44.41 0.00 0.00 11217.61 2387.38 12545.46 00:29:59.270 [2024-11-19T09:57:49.062Z] =================================================================================================================== 00:29:59.270 [2024-11-19T09:57:49.062Z] Total : 11370.14 44.41 0.00 0.00 11217.61 2387.38 12545.46 00:29:59.529 10:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=4084977 00:29:59.529 10:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:59.529 10:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:59.529 10:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:59.529 10:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:59.529 10:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:59.529 10:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:29:59.529 10:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:59.529 { 00:29:59.529 "params": { 00:29:59.529 "name": "Nvme$subsystem", 00:29:59.529 "trtype": "$TEST_TRANSPORT", 00:29:59.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:59.529 "adrfam": "ipv4", 00:29:59.529 "trsvcid": "$NVMF_PORT", 00:29:59.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:59.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:59.529 "hdgst": ${hdgst:-false}, 00:29:59.529 "ddgst": ${ddgst:-false} 00:29:59.529 }, 00:29:59.529 "method": "bdev_nvme_attach_controller" 00:29:59.529 } 00:29:59.529 EOF 00:29:59.529 )") 00:29:59.529 10:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:59.529 10:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:59.529 10:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:59.529 10:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:59.529 "params": { 00:29:59.529 "name": "Nvme1", 00:29:59.529 "trtype": "tcp", 00:29:59.529 "traddr": "10.0.0.2", 00:29:59.529 "adrfam": "ipv4", 00:29:59.529 "trsvcid": "4420", 00:29:59.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:59.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:59.529 "hdgst": false, 00:29:59.529 "ddgst": false 00:29:59.529 }, 00:29:59.529 "method": "bdev_nvme_attach_controller" 00:29:59.529 }' 00:29:59.529 [2024-11-19 10:57:49.134055] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
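The `gen_nvmf_target_json` trace above shows the harness's pattern for feeding bdevperf a config over `--json /dev/fd/63`: one heredoc-built JSON object per subsystem, accumulated into a bash array and comma-joined before being piped through `jq`. A minimal standalone sketch of that pattern (the transport/address values below are placeholders, not the real test environment):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern from the log: build one
# "bdev_nvme_attach_controller" params object per subsystem with a heredoc,
# collect them in an array, and comma-join them for bdevperf's --json input.
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT are placeholder values.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Comma-join the per-subsystem objects, as the harness does with IFS=, .
joined=$(IFS=,; printf '%s\n' "${config[*]}")
printf '%s\n' "$joined"
```

With more than one subsystem in `$@`, the loop emits one attach-controller object per target and the comma join yields a list bdevperf can consume as a JSON fragment.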
00:29:59.529 [2024-11-19 10:57:49.134103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4084977 ] 00:29:59.529 [2024-11-19 10:57:49.206766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.529 [2024-11-19 10:57:49.244661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.786 Running I/O for 15 seconds... 00:30:01.651 11243.00 IOPS, 43.92 MiB/s [2024-11-19T09:57:52.381Z] 11311.00 IOPS, 44.18 MiB/s [2024-11-19T09:57:52.381Z] 10:57:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 4084526 00:30:02.589 10:57:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:02.589 [2024-11-19 10:57:52.104050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.589 [2024-11-19 10:57:52.104095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:02.589 [... 10:57:52.104113 through 10:57:52.105703: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided; every outstanding WRITE from lba 112664 through 113440 (plus one READ at lba 112520) completed with ABORTED - SQ DELETION (00/08) after the kill -9 of the first bdevperf process ...]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.591 [2024-11-19 10:57:52.105711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.591 [2024-11-19 10:57:52.105717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.592 [2024-11-19 10:57:52.105735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.592 [2024-11-19 10:57:52.105750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.592 [2024-11-19 10:57:52.105765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.592 [2024-11-19 10:57:52.105779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.592 [2024-11-19 10:57:52.105793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.592 [2024-11-19 10:57:52.105808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.592 [2024-11-19 10:57:52.105822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.592 [2024-11-19 10:57:52.105836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.592 [2024-11-19 10:57:52.105850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.592 [2024-11-19 10:57:52.105864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 
[2024-11-19 10:57:52.105872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.592 [2024-11-19 10:57:52.105879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.592 [2024-11-19 10:57:52.105893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.592 [2024-11-19 10:57:52.105916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.592 [2024-11-19 10:57:52.105931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.592 [2024-11-19 10:57:52.105947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.592 [2024-11-19 10:57:52.105962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.592 [2024-11-19 10:57:52.105976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.592 [2024-11-19 10:57:52.105990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.105998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.592 [2024-11-19 10:57:52.106004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.106012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.592 [2024-11-19 10:57:52.106018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.106026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.592 [2024-11-19 10:57:52.106032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.106040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 
nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.592 [2024-11-19 10:57:52.106046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.106054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.592 [2024-11-19 10:57:52.106060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.106068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.592 [2024-11-19 10:57:52.106074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.106082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.592 [2024-11-19 10:57:52.106088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.106096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.592 [2024-11-19 10:57:52.106104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.106111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06cf0 is same with the state(6) to be set 00:30:02.592 [2024-11-19 10:57:52.106119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:02.592 [2024-11-19 10:57:52.106125] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:02.592 [2024-11-19 10:57:52.106130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112648 len:8 PRP1 0x0 PRP2 0x0 00:30:02.592 [2024-11-19 10:57:52.106138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.592 [2024-11-19 10:57:52.108914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.592 [2024-11-19 10:57:52.108967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.592 [2024-11-19 10:57:52.109572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.592 [2024-11-19 10:57:52.109589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.592 [2024-11-19 10:57:52.109597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.592 [2024-11-19 10:57:52.109769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.592 [2024-11-19 10:57:52.109942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.592 [2024-11-19 10:57:52.109949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.592 [2024-11-19 10:57:52.109958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.592 [2024-11-19 10:57:52.109965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.592 [2024-11-19 10:57:52.122111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.592 [2024-11-19 10:57:52.122547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.592 [2024-11-19 10:57:52.122565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.592 [2024-11-19 10:57:52.122572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.592 [2024-11-19 10:57:52.122745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.592 [2024-11-19 10:57:52.122918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.592 [2024-11-19 10:57:52.122926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.592 [2024-11-19 10:57:52.122933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.592 [2024-11-19 10:57:52.122939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.592 [2024-11-19 10:57:52.134951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.592 [2024-11-19 10:57:52.135395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.592 [2024-11-19 10:57:52.135413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.592 [2024-11-19 10:57:52.135420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.592 [2024-11-19 10:57:52.135588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.592 [2024-11-19 10:57:52.135758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.593 [2024-11-19 10:57:52.135767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.593 [2024-11-19 10:57:52.135773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.593 [2024-11-19 10:57:52.135779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.593 [2024-11-19 10:57:52.147798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.593 [2024-11-19 10:57:52.148165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.593 [2024-11-19 10:57:52.148181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.593 [2024-11-19 10:57:52.148188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.593 [2024-11-19 10:57:52.148360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.593 [2024-11-19 10:57:52.148528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.593 [2024-11-19 10:57:52.148537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.593 [2024-11-19 10:57:52.148543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.593 [2024-11-19 10:57:52.148549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.593 [2024-11-19 10:57:52.160639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.593 [2024-11-19 10:57:52.161050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.593 [2024-11-19 10:57:52.161066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.593 [2024-11-19 10:57:52.161073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.593 [2024-11-19 10:57:52.161245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.593 [2024-11-19 10:57:52.161413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.593 [2024-11-19 10:57:52.161420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.593 [2024-11-19 10:57:52.161427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.593 [2024-11-19 10:57:52.161433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.593 [2024-11-19 10:57:52.173542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.593 [2024-11-19 10:57:52.173948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.593 [2024-11-19 10:57:52.173991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.593 [2024-11-19 10:57:52.174014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.593 [2024-11-19 10:57:52.174605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.593 [2024-11-19 10:57:52.175138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.593 [2024-11-19 10:57:52.175146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.593 [2024-11-19 10:57:52.175156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.593 [2024-11-19 10:57:52.175163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.593 [2024-11-19 10:57:52.186371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.593 [2024-11-19 10:57:52.186800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.593 [2024-11-19 10:57:52.186843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.593 [2024-11-19 10:57:52.186867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.593 [2024-11-19 10:57:52.187456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.593 [2024-11-19 10:57:52.188012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.593 [2024-11-19 10:57:52.188020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.593 [2024-11-19 10:57:52.188026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.593 [2024-11-19 10:57:52.188032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.593 [2024-11-19 10:57:52.199106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.593 [2024-11-19 10:57:52.199561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.593 [2024-11-19 10:57:52.199607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.593 [2024-11-19 10:57:52.199632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.593 [2024-11-19 10:57:52.200097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.593 [2024-11-19 10:57:52.200270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.593 [2024-11-19 10:57:52.200279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.593 [2024-11-19 10:57:52.200285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.593 [2024-11-19 10:57:52.200292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.593 [2024-11-19 10:57:52.211889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.593 [2024-11-19 10:57:52.212275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.593 [2024-11-19 10:57:52.212290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.593 [2024-11-19 10:57:52.212297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.593 [2024-11-19 10:57:52.212455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.593 [2024-11-19 10:57:52.212612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.593 [2024-11-19 10:57:52.212619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.593 [2024-11-19 10:57:52.212625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.593 [2024-11-19 10:57:52.212631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.593 [2024-11-19 10:57:52.224683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.593 [2024-11-19 10:57:52.225087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.593 [2024-11-19 10:57:52.225131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.593 [2024-11-19 10:57:52.225155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.593 [2024-11-19 10:57:52.225750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.593 [2024-11-19 10:57:52.226210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.593 [2024-11-19 10:57:52.226219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.593 [2024-11-19 10:57:52.226225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.593 [2024-11-19 10:57:52.226231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.593 [2024-11-19 10:57:52.237422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.593 [2024-11-19 10:57:52.237824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.593 [2024-11-19 10:57:52.237868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.593 [2024-11-19 10:57:52.237891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.593 [2024-11-19 10:57:52.238356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.593 [2024-11-19 10:57:52.238524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.593 [2024-11-19 10:57:52.238532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.594 [2024-11-19 10:57:52.238538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.594 [2024-11-19 10:57:52.238544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.594 [2024-11-19 10:57:52.250206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.594 [2024-11-19 10:57:52.250603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.594 [2024-11-19 10:57:52.250619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.594 [2024-11-19 10:57:52.250626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.594 [2024-11-19 10:57:52.250793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.594 [2024-11-19 10:57:52.250959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.594 [2024-11-19 10:57:52.250968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.594 [2024-11-19 10:57:52.250974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.594 [2024-11-19 10:57:52.250980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.594 [2024-11-19 10:57:52.263099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.594 [2024-11-19 10:57:52.263527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.594 [2024-11-19 10:57:52.263543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.594 [2024-11-19 10:57:52.263553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.594 [2024-11-19 10:57:52.263720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.594 [2024-11-19 10:57:52.263887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.594 [2024-11-19 10:57:52.263895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.594 [2024-11-19 10:57:52.263901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.594 [2024-11-19 10:57:52.263908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.594 [2024-11-19 10:57:52.275824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.594 [2024-11-19 10:57:52.276261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.594 [2024-11-19 10:57:52.276307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.594 [2024-11-19 10:57:52.276330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.594 [2024-11-19 10:57:52.276908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.594 [2024-11-19 10:57:52.277144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.594 [2024-11-19 10:57:52.277151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.594 [2024-11-19 10:57:52.277158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.594 [2024-11-19 10:57:52.277164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.594 [2024-11-19 10:57:52.288635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.594 [2024-11-19 10:57:52.289053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.594 [2024-11-19 10:57:52.289096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.594 [2024-11-19 10:57:52.289120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.594 [2024-11-19 10:57:52.289711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.594 [2024-11-19 10:57:52.290280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.594 [2024-11-19 10:57:52.290288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.594 [2024-11-19 10:57:52.290294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.594 [2024-11-19 10:57:52.290300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.594 [2024-11-19 10:57:52.301416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.594 [2024-11-19 10:57:52.301842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.594 [2024-11-19 10:57:52.301857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.594 [2024-11-19 10:57:52.301864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.594 [2024-11-19 10:57:52.302031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.594 [2024-11-19 10:57:52.302210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.594 [2024-11-19 10:57:52.302219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.594 [2024-11-19 10:57:52.302225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.594 [2024-11-19 10:57:52.302231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.594 [2024-11-19 10:57:52.314225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.594 [2024-11-19 10:57:52.314635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.594 [2024-11-19 10:57:52.314680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.594 [2024-11-19 10:57:52.314703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.594 [2024-11-19 10:57:52.315184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.594 [2024-11-19 10:57:52.315372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.594 [2024-11-19 10:57:52.315381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.594 [2024-11-19 10:57:52.315387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.594 [2024-11-19 10:57:52.315393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.594 [2024-11-19 10:57:52.327106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.594 [2024-11-19 10:57:52.327531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.594 [2024-11-19 10:57:52.327547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.594 [2024-11-19 10:57:52.327554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.594 [2024-11-19 10:57:52.327721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.594 [2024-11-19 10:57:52.327889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.594 [2024-11-19 10:57:52.327897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.594 [2024-11-19 10:57:52.327903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.594 [2024-11-19 10:57:52.327909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.594 [2024-11-19 10:57:52.339931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.594 [2024-11-19 10:57:52.340367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.594 [2024-11-19 10:57:52.340411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.594 [2024-11-19 10:57:52.340433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.594 [2024-11-19 10:57:52.341011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.594 [2024-11-19 10:57:52.341498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.594 [2024-11-19 10:57:52.341506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.594 [2024-11-19 10:57:52.341516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.594 [2024-11-19 10:57:52.341523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.594 [2024-11-19 10:57:52.352774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.594 [2024-11-19 10:57:52.353169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.594 [2024-11-19 10:57:52.353185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.594 [2024-11-19 10:57:52.353192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.594 [2024-11-19 10:57:52.353377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.594 [2024-11-19 10:57:52.353544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.594 [2024-11-19 10:57:52.353552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.594 [2024-11-19 10:57:52.353558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.594 [2024-11-19 10:57:52.353564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.594 [2024-11-19 10:57:52.365571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.594 [2024-11-19 10:57:52.365988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.595 [2024-11-19 10:57:52.366004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.595 [2024-11-19 10:57:52.366012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.595 [2024-11-19 10:57:52.366184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.595 [2024-11-19 10:57:52.366362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.595 [2024-11-19 10:57:52.366371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.595 [2024-11-19 10:57:52.366379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.595 [2024-11-19 10:57:52.366385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.856 [2024-11-19 10:57:52.378522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.856 [2024-11-19 10:57:52.378902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.856 [2024-11-19 10:57:52.378918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.856 [2024-11-19 10:57:52.378926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.856 [2024-11-19 10:57:52.379098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.856 [2024-11-19 10:57:52.379277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.856 [2024-11-19 10:57:52.379286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.856 [2024-11-19 10:57:52.379292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.856 [2024-11-19 10:57:52.379298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.856 [2024-11-19 10:57:52.391601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.856 [2024-11-19 10:57:52.391969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.856 [2024-11-19 10:57:52.391985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.856 [2024-11-19 10:57:52.391993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.856 [2024-11-19 10:57:52.392164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.856 [2024-11-19 10:57:52.392342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.856 [2024-11-19 10:57:52.392351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.856 [2024-11-19 10:57:52.392357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.856 [2024-11-19 10:57:52.392363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.856 [2024-11-19 10:57:52.404511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.856 [2024-11-19 10:57:52.404891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.856 [2024-11-19 10:57:52.404907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.856 [2024-11-19 10:57:52.404914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.856 [2024-11-19 10:57:52.405080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.856 [2024-11-19 10:57:52.405250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.856 [2024-11-19 10:57:52.405258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.856 [2024-11-19 10:57:52.405264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.856 [2024-11-19 10:57:52.405271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.856 10149.67 IOPS, 39.65 MiB/s [2024-11-19T09:57:52.648Z] [2024-11-19 10:57:52.418637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.856 [2024-11-19 10:57:52.419066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.856 [2024-11-19 10:57:52.419110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.856 [2024-11-19 10:57:52.419133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.856 [2024-11-19 10:57:52.419612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.856 [2024-11-19 10:57:52.419785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.856 [2024-11-19 10:57:52.419793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.856 [2024-11-19 10:57:52.419799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.856 [2024-11-19 10:57:52.419805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.856 [2024-11-19 10:57:52.431383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.856 [2024-11-19 10:57:52.431806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.856 [2024-11-19 10:57:52.431822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.856 [2024-11-19 10:57:52.431832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.856 [2024-11-19 10:57:52.431990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.856 [2024-11-19 10:57:52.432148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.856 [2024-11-19 10:57:52.432156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.856 [2024-11-19 10:57:52.432162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.856 [2024-11-19 10:57:52.432167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.856 [2024-11-19 10:57:52.444102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.856 [2024-11-19 10:57:52.444514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.856 [2024-11-19 10:57:52.444550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.856 [2024-11-19 10:57:52.444575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.856 [2024-11-19 10:57:52.445157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.856 [2024-11-19 10:57:52.445556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.856 [2024-11-19 10:57:52.445574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.856 [2024-11-19 10:57:52.445587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.856 [2024-11-19 10:57:52.445600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.856 [2024-11-19 10:57:52.459041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.856 [2024-11-19 10:57:52.459540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.856 [2024-11-19 10:57:52.459562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.856 [2024-11-19 10:57:52.459572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.856 [2024-11-19 10:57:52.459824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.856 [2024-11-19 10:57:52.460079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.856 [2024-11-19 10:57:52.460090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.856 [2024-11-19 10:57:52.460100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.856 [2024-11-19 10:57:52.460109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.856 [2024-11-19 10:57:52.472075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.856 [2024-11-19 10:57:52.472407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.856 [2024-11-19 10:57:52.472423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.857 [2024-11-19 10:57:52.472431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.857 [2024-11-19 10:57:52.472602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.857 [2024-11-19 10:57:52.472777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.857 [2024-11-19 10:57:52.472785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.857 [2024-11-19 10:57:52.472792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.857 [2024-11-19 10:57:52.472798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.857 [2024-11-19 10:57:52.484822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.857 [2024-11-19 10:57:52.485223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.857 [2024-11-19 10:57:52.485268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.857 [2024-11-19 10:57:52.485291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.857 [2024-11-19 10:57:52.485843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.857 [2024-11-19 10:57:52.486242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.857 [2024-11-19 10:57:52.486260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.857 [2024-11-19 10:57:52.486273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.857 [2024-11-19 10:57:52.486287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.857 [2024-11-19 10:57:52.499510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.857 [2024-11-19 10:57:52.500038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.857 [2024-11-19 10:57:52.500081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.857 [2024-11-19 10:57:52.500104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.857 [2024-11-19 10:57:52.500595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.857 [2024-11-19 10:57:52.500849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.857 [2024-11-19 10:57:52.500860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.857 [2024-11-19 10:57:52.500870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.857 [2024-11-19 10:57:52.500878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.857 [2024-11-19 10:57:52.512476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.857 [2024-11-19 10:57:52.512864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.857 [2024-11-19 10:57:52.512881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.857 [2024-11-19 10:57:52.512887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.857 [2024-11-19 10:57:52.513054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.857 [2024-11-19 10:57:52.513227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.857 [2024-11-19 10:57:52.513235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.857 [2024-11-19 10:57:52.513245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.857 [2024-11-19 10:57:52.513252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.857 [2024-11-19 10:57:52.525256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.857 [2024-11-19 10:57:52.525653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.857 [2024-11-19 10:57:52.525696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.857 [2024-11-19 10:57:52.525719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.857 [2024-11-19 10:57:52.526311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.857 [2024-11-19 10:57:52.526718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.857 [2024-11-19 10:57:52.526735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.857 [2024-11-19 10:57:52.526749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.857 [2024-11-19 10:57:52.526763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.857 [2024-11-19 10:57:52.540225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.857 [2024-11-19 10:57:52.540714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.857 [2024-11-19 10:57:52.540735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.857 [2024-11-19 10:57:52.540746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.857 [2024-11-19 10:57:52.540998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.857 [2024-11-19 10:57:52.541259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.857 [2024-11-19 10:57:52.541270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.857 [2024-11-19 10:57:52.541280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.857 [2024-11-19 10:57:52.541289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.857 [2024-11-19 10:57:52.553287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.857 [2024-11-19 10:57:52.553690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.857 [2024-11-19 10:57:52.553706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.857 [2024-11-19 10:57:52.553713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.857 [2024-11-19 10:57:52.553885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.857 [2024-11-19 10:57:52.554056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.857 [2024-11-19 10:57:52.554064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.857 [2024-11-19 10:57:52.554070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.857 [2024-11-19 10:57:52.554077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.857 [2024-11-19 10:57:52.566026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.857 [2024-11-19 10:57:52.566462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.857 [2024-11-19 10:57:52.566507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.857 [2024-11-19 10:57:52.566530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.857 [2024-11-19 10:57:52.566797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.857 [2024-11-19 10:57:52.566964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.857 [2024-11-19 10:57:52.566972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.857 [2024-11-19 10:57:52.566978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.857 [2024-11-19 10:57:52.566984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.857 [2024-11-19 10:57:52.578734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.857 [2024-11-19 10:57:52.579129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.857 [2024-11-19 10:57:52.579171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.857 [2024-11-19 10:57:52.579193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.857 [2024-11-19 10:57:52.579657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.857 [2024-11-19 10:57:52.579823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.857 [2024-11-19 10:57:52.579831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.857 [2024-11-19 10:57:52.579837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.857 [2024-11-19 10:57:52.579844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.857 [2024-11-19 10:57:52.591573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.857 [2024-11-19 10:57:52.591961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.857 [2024-11-19 10:57:52.591976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.857 [2024-11-19 10:57:52.591983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.857 [2024-11-19 10:57:52.592140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.857 [2024-11-19 10:57:52.592323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.857 [2024-11-19 10:57:52.592332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.857 [2024-11-19 10:57:52.592338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.857 [2024-11-19 10:57:52.592344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.857 [2024-11-19 10:57:52.604305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.858 [2024-11-19 10:57:52.604679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.858 [2024-11-19 10:57:52.604695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.858 [2024-11-19 10:57:52.604707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.858 [2024-11-19 10:57:52.604873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.858 [2024-11-19 10:57:52.605040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.858 [2024-11-19 10:57:52.605048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.858 [2024-11-19 10:57:52.605054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.858 [2024-11-19 10:57:52.605060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.858 [2024-11-19 10:57:52.617230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.858 [2024-11-19 10:57:52.617656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.858 [2024-11-19 10:57:52.617673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.858 [2024-11-19 10:57:52.617681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.858 [2024-11-19 10:57:52.617852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.858 [2024-11-19 10:57:52.618024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.858 [2024-11-19 10:57:52.618033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.858 [2024-11-19 10:57:52.618040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.858 [2024-11-19 10:57:52.618047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.858 [2024-11-19 10:57:52.630237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.858 [2024-11-19 10:57:52.630653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.858 [2024-11-19 10:57:52.630696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.858 [2024-11-19 10:57:52.630719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.858 [2024-11-19 10:57:52.631256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:02.858 [2024-11-19 10:57:52.631428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.858 [2024-11-19 10:57:52.631437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.858 [2024-11-19 10:57:52.631443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.858 [2024-11-19 10:57:52.631449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.858 [2024-11-19 10:57:52.643268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.858 [2024-11-19 10:57:52.643666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.858 [2024-11-19 10:57:52.643682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:02.858 [2024-11-19 10:57:52.643689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:02.858 [2024-11-19 10:57:52.643860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.119 [2024-11-19 10:57:52.644034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.119 [2024-11-19 10:57:52.644043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.119 [2024-11-19 10:57:52.644050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.119 [2024-11-19 10:57:52.644056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.119 [2024-11-19 10:57:52.656272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.119 [2024-11-19 10:57:52.656685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.119 [2024-11-19 10:57:52.656700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.119 [2024-11-19 10:57:52.656707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.119 [2024-11-19 10:57:52.656874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.119 [2024-11-19 10:57:52.657041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.119 [2024-11-19 10:57:52.657049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.119 [2024-11-19 10:57:52.657055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.119 [2024-11-19 10:57:52.657061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.119 [2024-11-19 10:57:52.669124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.119 [2024-11-19 10:57:52.669487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.119 [2024-11-19 10:57:52.669504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.119 [2024-11-19 10:57:52.669511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.119 [2024-11-19 10:57:52.669683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.119 [2024-11-19 10:57:52.669856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.119 [2024-11-19 10:57:52.669864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.119 [2024-11-19 10:57:52.669870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.119 [2024-11-19 10:57:52.669877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.119 [2024-11-19 10:57:52.681976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.119 [2024-11-19 10:57:52.682320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.119 [2024-11-19 10:57:52.682336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.119 [2024-11-19 10:57:52.682343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.119 [2024-11-19 10:57:52.682501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.119 [2024-11-19 10:57:52.682660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.119 [2024-11-19 10:57:52.682667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.119 [2024-11-19 10:57:52.682677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.119 [2024-11-19 10:57:52.682683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.119 [2024-11-19 10:57:52.694692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.119 [2024-11-19 10:57:52.695081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.119 [2024-11-19 10:57:52.695096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.119 [2024-11-19 10:57:52.695103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.119 [2024-11-19 10:57:52.695284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.119 [2024-11-19 10:57:52.695451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.119 [2024-11-19 10:57:52.695459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.119 [2024-11-19 10:57:52.695465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.119 [2024-11-19 10:57:52.695471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.119 [2024-11-19 10:57:52.707530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.119 [2024-11-19 10:57:52.707915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.119 [2024-11-19 10:57:52.707931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.119 [2024-11-19 10:57:52.707964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.119 [2024-11-19 10:57:52.708500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.119 [2024-11-19 10:57:52.708668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.119 [2024-11-19 10:57:52.708676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.119 [2024-11-19 10:57:52.708682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.119 [2024-11-19 10:57:52.708688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.119 [2024-11-19 10:57:52.720320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.119 [2024-11-19 10:57:52.720728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.119 [2024-11-19 10:57:52.720744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.119 [2024-11-19 10:57:52.720750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.119 [2024-11-19 10:57:52.720909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.119 [2024-11-19 10:57:52.721067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.119 [2024-11-19 10:57:52.721074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.119 [2024-11-19 10:57:52.721080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.119 [2024-11-19 10:57:52.721086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.119 [2024-11-19 10:57:52.733265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.119 [2024-11-19 10:57:52.733683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.120 [2024-11-19 10:57:52.733698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.120 [2024-11-19 10:57:52.733705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.120 [2024-11-19 10:57:52.733872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.120 [2024-11-19 10:57:52.734043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.120 [2024-11-19 10:57:52.734051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.120 [2024-11-19 10:57:52.734058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.120 [2024-11-19 10:57:52.734064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.120 [2024-11-19 10:57:52.745993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.120 [2024-11-19 10:57:52.746406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.120 [2024-11-19 10:57:52.746422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.120 [2024-11-19 10:57:52.746429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.120 [2024-11-19 10:57:52.746597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.120 [2024-11-19 10:57:52.746763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.120 [2024-11-19 10:57:52.746771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.120 [2024-11-19 10:57:52.746777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.120 [2024-11-19 10:57:52.746783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.120 [2024-11-19 10:57:52.758719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.120 [2024-11-19 10:57:52.759103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.120 [2024-11-19 10:57:52.759118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.120 [2024-11-19 10:57:52.759124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.120 [2024-11-19 10:57:52.759307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.120 [2024-11-19 10:57:52.759474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.120 [2024-11-19 10:57:52.759482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.120 [2024-11-19 10:57:52.759489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.120 [2024-11-19 10:57:52.759495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.120 [2024-11-19 10:57:52.771556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.120 [2024-11-19 10:57:52.771948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.120 [2024-11-19 10:57:52.771991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.120 [2024-11-19 10:57:52.772021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.120 [2024-11-19 10:57:52.772526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.120 [2024-11-19 10:57:52.772693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.120 [2024-11-19 10:57:52.772701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.120 [2024-11-19 10:57:52.772707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.120 [2024-11-19 10:57:52.772713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.120 [2024-11-19 10:57:52.784305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.120 [2024-11-19 10:57:52.784695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.120 [2024-11-19 10:57:52.784711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.120 [2024-11-19 10:57:52.784717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.120 [2024-11-19 10:57:52.784875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.120 [2024-11-19 10:57:52.785033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.120 [2024-11-19 10:57:52.785041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.120 [2024-11-19 10:57:52.785047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.120 [2024-11-19 10:57:52.785053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.120 [2024-11-19 10:57:52.797015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.120 [2024-11-19 10:57:52.797346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.120 [2024-11-19 10:57:52.797363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.120 [2024-11-19 10:57:52.797370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.120 [2024-11-19 10:57:52.797536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.120 [2024-11-19 10:57:52.797702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.120 [2024-11-19 10:57:52.797710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.120 [2024-11-19 10:57:52.797717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.120 [2024-11-19 10:57:52.797723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.120 [2024-11-19 10:57:52.810013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.120 [2024-11-19 10:57:52.810449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.120 [2024-11-19 10:57:52.810466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.120 [2024-11-19 10:57:52.810474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.120 [2024-11-19 10:57:52.810641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.120 [2024-11-19 10:57:52.810814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.120 [2024-11-19 10:57:52.810825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.120 [2024-11-19 10:57:52.810833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.120 [2024-11-19 10:57:52.810840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.120 [2024-11-19 10:57:52.822945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.120 [2024-11-19 10:57:52.823358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.120 [2024-11-19 10:57:52.823376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.120 [2024-11-19 10:57:52.823383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.120 [2024-11-19 10:57:52.823550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.120 [2024-11-19 10:57:52.823719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.120 [2024-11-19 10:57:52.823727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.120 [2024-11-19 10:57:52.823734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.120 [2024-11-19 10:57:52.823740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.120 [2024-11-19 10:57:52.835767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.120 [2024-11-19 10:57:52.836172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.120 [2024-11-19 10:57:52.836189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.120 [2024-11-19 10:57:52.836197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.120 [2024-11-19 10:57:52.836369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.120 [2024-11-19 10:57:52.836537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.120 [2024-11-19 10:57:52.836546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.120 [2024-11-19 10:57:52.836554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.120 [2024-11-19 10:57:52.836561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.120 [2024-11-19 10:57:52.848612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.120 [2024-11-19 10:57:52.848938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.120 [2024-11-19 10:57:52.848955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.120 [2024-11-19 10:57:52.848962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.120 [2024-11-19 10:57:52.849129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.120 [2024-11-19 10:57:52.849301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.120 [2024-11-19 10:57:52.849310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.120 [2024-11-19 10:57:52.849316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.120 [2024-11-19 10:57:52.849326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.120 [2024-11-19 10:57:52.861585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.120 [2024-11-19 10:57:52.862005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.121 [2024-11-19 10:57:52.862021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.121 [2024-11-19 10:57:52.862028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.121 [2024-11-19 10:57:52.862194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.121 [2024-11-19 10:57:52.862370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.121 [2024-11-19 10:57:52.862380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.121 [2024-11-19 10:57:52.862387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.121 [2024-11-19 10:57:52.862393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.121 [2024-11-19 10:57:52.874400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.121 [2024-11-19 10:57:52.874753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.121 [2024-11-19 10:57:52.874772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.121 [2024-11-19 10:57:52.874779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.121 [2024-11-19 10:57:52.874951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.121 [2024-11-19 10:57:52.875122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.121 [2024-11-19 10:57:52.875131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.121 [2024-11-19 10:57:52.875138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.121 [2024-11-19 10:57:52.875145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.121 [2024-11-19 10:57:52.887406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.121 [2024-11-19 10:57:52.887695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.121 [2024-11-19 10:57:52.887711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.121 [2024-11-19 10:57:52.887719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.121 [2024-11-19 10:57:52.887890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.121 [2024-11-19 10:57:52.888062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.121 [2024-11-19 10:57:52.888071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.121 [2024-11-19 10:57:52.888077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.121 [2024-11-19 10:57:52.888083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.121 [2024-11-19 10:57:52.900426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.121 [2024-11-19 10:57:52.900874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.121 [2024-11-19 10:57:52.900917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.121 [2024-11-19 10:57:52.900940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.121 [2024-11-19 10:57:52.901457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.121 [2024-11-19 10:57:52.901629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.121 [2024-11-19 10:57:52.901638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.121 [2024-11-19 10:57:52.901644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.121 [2024-11-19 10:57:52.901650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.382 [2024-11-19 10:57:52.913416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.382 [2024-11-19 10:57:52.913822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.382 [2024-11-19 10:57:52.913839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.382 [2024-11-19 10:57:52.913846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.382 [2024-11-19 10:57:52.914016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.382 [2024-11-19 10:57:52.914187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.382 [2024-11-19 10:57:52.914195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.382 [2024-11-19 10:57:52.914208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.382 [2024-11-19 10:57:52.914215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.382 [2024-11-19 10:57:52.926295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.382 [2024-11-19 10:57:52.926675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.382 [2024-11-19 10:57:52.926719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.382 [2024-11-19 10:57:52.926741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.382 [2024-11-19 10:57:52.927332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.382 [2024-11-19 10:57:52.927894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.382 [2024-11-19 10:57:52.927902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.382 [2024-11-19 10:57:52.927908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.382 [2024-11-19 10:57:52.927914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.382 [2024-11-19 10:57:52.939042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.382 [2024-11-19 10:57:52.939348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.382 [2024-11-19 10:57:52.939365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.382 [2024-11-19 10:57:52.939372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.382 [2024-11-19 10:57:52.939542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.382 [2024-11-19 10:57:52.939710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.382 [2024-11-19 10:57:52.939718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.382 [2024-11-19 10:57:52.939724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.382 [2024-11-19 10:57:52.939730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.382 [2024-11-19 10:57:52.951807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.382 [2024-11-19 10:57:52.952185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.382 [2024-11-19 10:57:52.952200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.382 [2024-11-19 10:57:52.952211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.382 [2024-11-19 10:57:52.952378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.382 [2024-11-19 10:57:52.952544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.382 [2024-11-19 10:57:52.952552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.382 [2024-11-19 10:57:52.952558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.382 [2024-11-19 10:57:52.952564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.382 [2024-11-19 10:57:52.964632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.382 [2024-11-19 10:57:52.965055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.382 [2024-11-19 10:57:52.965071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.382 [2024-11-19 10:57:52.965078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.382 [2024-11-19 10:57:52.965255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.382 [2024-11-19 10:57:52.965428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.382 [2024-11-19 10:57:52.965436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.382 [2024-11-19 10:57:52.965442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.382 [2024-11-19 10:57:52.965448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.382 [2024-11-19 10:57:52.977620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.382 [2024-11-19 10:57:52.978022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.382 [2024-11-19 10:57:52.978038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.382 [2024-11-19 10:57:52.978045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.382 [2024-11-19 10:57:52.978222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.382 [2024-11-19 10:57:52.978393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.382 [2024-11-19 10:57:52.978404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.382 [2024-11-19 10:57:52.978410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.382 [2024-11-19 10:57:52.978417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.382 [2024-11-19 10:57:52.990713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.382 [2024-11-19 10:57:52.991160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.382 [2024-11-19 10:57:52.991177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.382 [2024-11-19 10:57:52.991184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.382 [2024-11-19 10:57:52.991373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.382 [2024-11-19 10:57:52.991555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.382 [2024-11-19 10:57:52.991564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.382 [2024-11-19 10:57:52.991571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.382 [2024-11-19 10:57:52.991577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.382 [2024-11-19 10:57:53.003956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.382 [2024-11-19 10:57:53.004325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.383 [2024-11-19 10:57:53.004342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.383 [2024-11-19 10:57:53.004350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.383 [2024-11-19 10:57:53.004532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.383 [2024-11-19 10:57:53.004715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.383 [2024-11-19 10:57:53.004723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.383 [2024-11-19 10:57:53.004730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.383 [2024-11-19 10:57:53.004737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.383 [2024-11-19 10:57:53.016963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.383 [2024-11-19 10:57:53.017378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.383 [2024-11-19 10:57:53.017395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.383 [2024-11-19 10:57:53.017403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.383 [2024-11-19 10:57:53.017585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.383 [2024-11-19 10:57:53.017768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.383 [2024-11-19 10:57:53.017777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.383 [2024-11-19 10:57:53.017783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.383 [2024-11-19 10:57:53.017793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.383 [2024-11-19 10:57:53.029964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.383 [2024-11-19 10:57:53.030392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.383 [2024-11-19 10:57:53.030409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.383 [2024-11-19 10:57:53.030416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.383 [2024-11-19 10:57:53.030588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.383 [2024-11-19 10:57:53.030760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.383 [2024-11-19 10:57:53.030768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.383 [2024-11-19 10:57:53.030774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.383 [2024-11-19 10:57:53.030780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.383 [2024-11-19 10:57:53.043105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.383 [2024-11-19 10:57:53.043551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.383 [2024-11-19 10:57:53.043568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.383 [2024-11-19 10:57:53.043576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.383 [2024-11-19 10:57:53.043758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.383 [2024-11-19 10:57:53.043942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.383 [2024-11-19 10:57:53.043950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.383 [2024-11-19 10:57:53.043957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.383 [2024-11-19 10:57:53.043964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.383 [2024-11-19 10:57:53.056349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.383 [2024-11-19 10:57:53.056702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.383 [2024-11-19 10:57:53.056719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.383 [2024-11-19 10:57:53.056727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.383 [2024-11-19 10:57:53.056909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.383 [2024-11-19 10:57:53.057093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.383 [2024-11-19 10:57:53.057101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.383 [2024-11-19 10:57:53.057108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.383 [2024-11-19 10:57:53.057115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.383 [2024-11-19 10:57:53.069729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.383 [2024-11-19 10:57:53.070166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.383 [2024-11-19 10:57:53.070183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.383 [2024-11-19 10:57:53.070191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.383 [2024-11-19 10:57:53.070392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.383 [2024-11-19 10:57:53.070587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.383 [2024-11-19 10:57:53.070596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.383 [2024-11-19 10:57:53.070604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.383 [2024-11-19 10:57:53.070611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.383 [2024-11-19 10:57:53.082695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.383 [2024-11-19 10:57:53.083101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.383 [2024-11-19 10:57:53.083118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.383 [2024-11-19 10:57:53.083125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.383 [2024-11-19 10:57:53.083301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.383 [2024-11-19 10:57:53.083473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.383 [2024-11-19 10:57:53.083482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.383 [2024-11-19 10:57:53.083488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.383 [2024-11-19 10:57:53.083494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.383 [2024-11-19 10:57:53.095889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.383 [2024-11-19 10:57:53.096292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.383 [2024-11-19 10:57:53.096309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.383 [2024-11-19 10:57:53.096316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.383 [2024-11-19 10:57:53.096487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.383 [2024-11-19 10:57:53.096659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.383 [2024-11-19 10:57:53.096667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.383 [2024-11-19 10:57:53.096674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.383 [2024-11-19 10:57:53.096680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.383 [2024-11-19 10:57:53.109063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.383 [2024-11-19 10:57:53.109507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.383 [2024-11-19 10:57:53.109524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.383 [2024-11-19 10:57:53.109531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.383 [2024-11-19 10:57:53.109717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.383 [2024-11-19 10:57:53.109898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.383 [2024-11-19 10:57:53.109907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.383 [2024-11-19 10:57:53.109914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.383 [2024-11-19 10:57:53.109920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.383 [2024-11-19 10:57:53.122356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.383 [2024-11-19 10:57:53.122792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.383 [2024-11-19 10:57:53.122808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.383 [2024-11-19 10:57:53.122816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.383 [2024-11-19 10:57:53.122998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.383 [2024-11-19 10:57:53.123181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.383 [2024-11-19 10:57:53.123190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.383 [2024-11-19 10:57:53.123197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.383 [2024-11-19 10:57:53.123209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.384 [2024-11-19 10:57:53.135559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.384 [2024-11-19 10:57:53.135999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.384 [2024-11-19 10:57:53.136017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.384 [2024-11-19 10:57:53.136024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.384 [2024-11-19 10:57:53.136212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.384 [2024-11-19 10:57:53.136423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.384 [2024-11-19 10:57:53.136432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.384 [2024-11-19 10:57:53.136439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.384 [2024-11-19 10:57:53.136447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.384 [2024-11-19 10:57:53.148637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.384 [2024-11-19 10:57:53.148907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.384 [2024-11-19 10:57:53.148924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.384 [2024-11-19 10:57:53.148932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.384 [2024-11-19 10:57:53.149114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.384 [2024-11-19 10:57:53.149303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.384 [2024-11-19 10:57:53.149315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.384 [2024-11-19 10:57:53.149322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.384 [2024-11-19 10:57:53.149329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.384 [2024-11-19 10:57:53.161641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.384 [2024-11-19 10:57:53.161975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.384 [2024-11-19 10:57:53.161991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.384 [2024-11-19 10:57:53.161998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.384 [2024-11-19 10:57:53.162169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.384 [2024-11-19 10:57:53.162348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.384 [2024-11-19 10:57:53.162356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.384 [2024-11-19 10:57:53.162363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.384 [2024-11-19 10:57:53.162369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.644 [2024-11-19 10:57:53.174669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.644 [2024-11-19 10:57:53.175037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.644 [2024-11-19 10:57:53.175053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.644 [2024-11-19 10:57:53.175060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.645 [2024-11-19 10:57:53.175238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.645 [2024-11-19 10:57:53.175411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.645 [2024-11-19 10:57:53.175420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.645 [2024-11-19 10:57:53.175426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.645 [2024-11-19 10:57:53.175432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.645 [2024-11-19 10:57:53.187684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.645 [2024-11-19 10:57:53.188047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-11-19 10:57:53.188063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.645 [2024-11-19 10:57:53.188071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.645 [2024-11-19 10:57:53.188249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.645 [2024-11-19 10:57:53.188421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.645 [2024-11-19 10:57:53.188429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.645 [2024-11-19 10:57:53.188436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.645 [2024-11-19 10:57:53.188446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.645 [2024-11-19 10:57:53.200780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.645 [2024-11-19 10:57:53.201067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.645 [2024-11-19 10:57:53.201084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.645 [2024-11-19 10:57:53.201091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.645 [2024-11-19 10:57:53.201270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.645 [2024-11-19 10:57:53.201443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.645 [2024-11-19 10:57:53.201452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.645 [2024-11-19 10:57:53.201458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.645 [2024-11-19 10:57:53.201464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.645 [2024-11-19 10:57:53.213617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.645 [2024-11-19 10:57:53.213890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.645 [2024-11-19 10:57:53.213906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.645 [2024-11-19 10:57:53.213913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.645 [2024-11-19 10:57:53.214080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.645 [2024-11-19 10:57:53.214252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.645 [2024-11-19 10:57:53.214261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.645 [2024-11-19 10:57:53.214267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.645 [2024-11-19 10:57:53.214274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.645 [2024-11-19 10:57:53.226480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.645 [2024-11-19 10:57:53.226857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.645 [2024-11-19 10:57:53.226873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.645 [2024-11-19 10:57:53.226880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.645 [2024-11-19 10:57:53.227046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.645 [2024-11-19 10:57:53.227219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.645 [2024-11-19 10:57:53.227228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.645 [2024-11-19 10:57:53.227234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.645 [2024-11-19 10:57:53.227240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.645 [2024-11-19 10:57:53.239434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.645 [2024-11-19 10:57:53.239867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.645 [2024-11-19 10:57:53.239918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.645 [2024-11-19 10:57:53.239942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.645 [2024-11-19 10:57:53.240513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.645 [2024-11-19 10:57:53.240682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.645 [2024-11-19 10:57:53.240689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.645 [2024-11-19 10:57:53.240695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.645 [2024-11-19 10:57:53.240702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.645 [2024-11-19 10:57:53.252392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.645 [2024-11-19 10:57:53.252804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.645 [2024-11-19 10:57:53.252820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.645 [2024-11-19 10:57:53.252827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.645 [2024-11-19 10:57:53.252993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.645 [2024-11-19 10:57:53.253160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.645 [2024-11-19 10:57:53.253168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.645 [2024-11-19 10:57:53.253174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.645 [2024-11-19 10:57:53.253180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.645 [2024-11-19 10:57:53.265269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.645 [2024-11-19 10:57:53.265623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.645 [2024-11-19 10:57:53.265667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.645 [2024-11-19 10:57:53.265690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.645 [2024-11-19 10:57:53.266282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.645 [2024-11-19 10:57:53.266795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.645 [2024-11-19 10:57:53.266802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.645 [2024-11-19 10:57:53.266809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.645 [2024-11-19 10:57:53.266815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.645 [2024-11-19 10:57:53.278082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.645 [2024-11-19 10:57:53.278540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.645 [2024-11-19 10:57:53.278585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.645 [2024-11-19 10:57:53.278608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.645 [2024-11-19 10:57:53.278999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.645 [2024-11-19 10:57:53.279166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.645 [2024-11-19 10:57:53.279174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.645 [2024-11-19 10:57:53.279181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.645 [2024-11-19 10:57:53.279187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.645 [2024-11-19 10:57:53.290892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.645 [2024-11-19 10:57:53.291350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.645 [2024-11-19 10:57:53.291367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.645 [2024-11-19 10:57:53.291374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.645 [2024-11-19 10:57:53.291542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.645 [2024-11-19 10:57:53.291700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.645 [2024-11-19 10:57:53.291708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.645 [2024-11-19 10:57:53.291714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.645 [2024-11-19 10:57:53.291720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.645 [2024-11-19 10:57:53.303640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.646 [2024-11-19 10:57:53.304009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.646 [2024-11-19 10:57:53.304025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.646 [2024-11-19 10:57:53.304032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.646 [2024-11-19 10:57:53.304189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.646 [2024-11-19 10:57:53.304375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.646 [2024-11-19 10:57:53.304383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.646 [2024-11-19 10:57:53.304389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.646 [2024-11-19 10:57:53.304396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.646 [2024-11-19 10:57:53.316456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.646 [2024-11-19 10:57:53.316850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.646 [2024-11-19 10:57:53.316866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.646 [2024-11-19 10:57:53.316872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.646 [2024-11-19 10:57:53.317030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.646 [2024-11-19 10:57:53.317188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.646 [2024-11-19 10:57:53.317198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.646 [2024-11-19 10:57:53.317210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.646 [2024-11-19 10:57:53.317216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.646 [2024-11-19 10:57:53.329247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.646 [2024-11-19 10:57:53.329637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.646 [2024-11-19 10:57:53.329652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.646 [2024-11-19 10:57:53.329659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.646 [2024-11-19 10:57:53.329817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.646 [2024-11-19 10:57:53.329974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.646 [2024-11-19 10:57:53.329981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.646 [2024-11-19 10:57:53.329987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.646 [2024-11-19 10:57:53.329993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.646 [2024-11-19 10:57:53.341966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.646 [2024-11-19 10:57:53.342377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.646 [2024-11-19 10:57:53.342393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.646 [2024-11-19 10:57:53.342400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.646 [2024-11-19 10:57:53.342558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.646 [2024-11-19 10:57:53.342716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.646 [2024-11-19 10:57:53.342723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.646 [2024-11-19 10:57:53.342729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.646 [2024-11-19 10:57:53.342735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.646 [2024-11-19 10:57:53.354698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.646 [2024-11-19 10:57:53.355131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.646 [2024-11-19 10:57:53.355147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.646 [2024-11-19 10:57:53.355154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.646 [2024-11-19 10:57:53.355325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.646 [2024-11-19 10:57:53.355493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.646 [2024-11-19 10:57:53.355501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.646 [2024-11-19 10:57:53.355508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.646 [2024-11-19 10:57:53.355514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.646 [2024-11-19 10:57:53.367496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.646 [2024-11-19 10:57:53.367883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.646 [2024-11-19 10:57:53.367899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.646 [2024-11-19 10:57:53.367905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.646 [2024-11-19 10:57:53.368063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.646 [2024-11-19 10:57:53.368226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.646 [2024-11-19 10:57:53.368250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.646 [2024-11-19 10:57:53.368257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.646 [2024-11-19 10:57:53.368264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.646 [2024-11-19 10:57:53.380302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.646 [2024-11-19 10:57:53.380722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.646 [2024-11-19 10:57:53.380738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.646 [2024-11-19 10:57:53.380744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.646 [2024-11-19 10:57:53.380902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.646 [2024-11-19 10:57:53.381060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.646 [2024-11-19 10:57:53.381067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.646 [2024-11-19 10:57:53.381073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.646 [2024-11-19 10:57:53.381079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.646 [2024-11-19 10:57:53.393023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.646 [2024-11-19 10:57:53.393456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.646 [2024-11-19 10:57:53.393473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.646 [2024-11-19 10:57:53.393480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.646 [2024-11-19 10:57:53.393651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.646 [2024-11-19 10:57:53.393823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.646 [2024-11-19 10:57:53.393832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.646 [2024-11-19 10:57:53.393838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.646 [2024-11-19 10:57:53.393844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.646 [2024-11-19 10:57:53.406046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.646 [2024-11-19 10:57:53.406433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.646 [2024-11-19 10:57:53.406453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.646 [2024-11-19 10:57:53.406461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.646 [2024-11-19 10:57:53.406632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.646 [2024-11-19 10:57:53.406806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.646 [2024-11-19 10:57:53.406815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.646 [2024-11-19 10:57:53.406821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.646 [2024-11-19 10:57:53.406827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.646 [2024-11-19 10:57:53.418914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.646 [2024-11-19 10:57:53.419334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.646 [2024-11-19 10:57:53.419350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.646 [2024-11-19 10:57:53.419357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.646 [2024-11-19 10:57:53.419525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.646 [2024-11-19 10:57:53.419692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.646 [2024-11-19 10:57:53.419700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.646 [2024-11-19 10:57:53.419707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.647 [2024-11-19 10:57:53.419712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.647 7612.25 IOPS, 29.74 MiB/s [2024-11-19T09:57:53.439Z] [2024-11-19 10:57:53.431982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.647 [2024-11-19 10:57:53.432439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.647 [2024-11-19 10:57:53.432455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.647 [2024-11-19 10:57:53.432463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.647 [2024-11-19 10:57:53.432636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.907 [2024-11-19 10:57:53.432807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.907 [2024-11-19 10:57:53.432816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.907 [2024-11-19 10:57:53.432822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.907 [2024-11-19 10:57:53.432828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.907 [2024-11-19 10:57:53.444852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.907 [2024-11-19 10:57:53.445320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.907 [2024-11-19 10:57:53.445336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.907 [2024-11-19 10:57:53.445343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.907 [2024-11-19 10:57:53.445514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.907 [2024-11-19 10:57:53.445683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.907 [2024-11-19 10:57:53.445691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.907 [2024-11-19 10:57:53.445697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.907 [2024-11-19 10:57:53.445703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.907 [2024-11-19 10:57:53.457656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.908 [2024-11-19 10:57:53.458081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.908 [2024-11-19 10:57:53.458124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.908 [2024-11-19 10:57:53.458147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.908 [2024-11-19 10:57:53.458742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.908 [2024-11-19 10:57:53.459256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.908 [2024-11-19 10:57:53.459264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.908 [2024-11-19 10:57:53.459271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.908 [2024-11-19 10:57:53.459277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.908 [2024-11-19 10:57:53.470510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.908 [2024-11-19 10:57:53.470931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.908 [2024-11-19 10:57:53.470946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.908 [2024-11-19 10:57:53.470953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.908 [2024-11-19 10:57:53.471110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.908 [2024-11-19 10:57:53.471292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.908 [2024-11-19 10:57:53.471300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.908 [2024-11-19 10:57:53.471307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.908 [2024-11-19 10:57:53.471313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.908 [2024-11-19 10:57:53.483293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.908 [2024-11-19 10:57:53.483743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.908 [2024-11-19 10:57:53.483786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:03.908 [2024-11-19 10:57:53.483809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:03.908 [2024-11-19 10:57:53.484272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:03.908 [2024-11-19 10:57:53.484445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.908 [2024-11-19 10:57:53.484453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.908 [2024-11-19 10:57:53.484462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.908 [2024-11-19 10:57:53.484480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.908 [2024-11-19 10:57:53.496017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.908 [2024-11-19 10:57:53.496413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.908 [2024-11-19 10:57:53.496429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.908 [2024-11-19 10:57:53.496437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.909 [2024-11-19 10:57:53.496604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.909 [2024-11-19 10:57:53.496770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.909 [2024-11-19 10:57:53.496778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.909 [2024-11-19 10:57:53.496784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.909 [2024-11-19 10:57:53.496790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.909 [2024-11-19 10:57:53.508850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.909 [2024-11-19 10:57:53.509269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.909 [2024-11-19 10:57:53.509285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.909 [2024-11-19 10:57:53.509292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.909 [2024-11-19 10:57:53.509450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.909 [2024-11-19 10:57:53.509608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.909 [2024-11-19 10:57:53.509616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.909 [2024-11-19 10:57:53.509621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.909 [2024-11-19 10:57:53.509627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.909 [2024-11-19 10:57:53.521712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.909 [2024-11-19 10:57:53.522112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.909 [2024-11-19 10:57:53.522128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.909 [2024-11-19 10:57:53.522135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.909 [2024-11-19 10:57:53.522318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.909 [2024-11-19 10:57:53.522485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.909 [2024-11-19 10:57:53.522493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.909 [2024-11-19 10:57:53.522500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.909 [2024-11-19 10:57:53.522506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.909 [2024-11-19 10:57:53.534427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.909 [2024-11-19 10:57:53.534848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.909 [2024-11-19 10:57:53.534863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.909 [2024-11-19 10:57:53.534870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.910 [2024-11-19 10:57:53.535028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.910 [2024-11-19 10:57:53.535187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.910 [2024-11-19 10:57:53.535194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.910 [2024-11-19 10:57:53.535200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.910 [2024-11-19 10:57:53.535212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.910 [2024-11-19 10:57:53.547148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.910 [2024-11-19 10:57:53.547570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.910 [2024-11-19 10:57:53.547586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.910 [2024-11-19 10:57:53.547593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.910 [2024-11-19 10:57:53.547750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.910 [2024-11-19 10:57:53.547908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.910 [2024-11-19 10:57:53.547916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.910 [2024-11-19 10:57:53.547922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.910 [2024-11-19 10:57:53.547928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.910 [2024-11-19 10:57:53.559957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.910 [2024-11-19 10:57:53.560380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.910 [2024-11-19 10:57:53.560425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.910 [2024-11-19 10:57:53.560448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.910 [2024-11-19 10:57:53.561027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.910 [2024-11-19 10:57:53.561470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.910 [2024-11-19 10:57:53.561479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.910 [2024-11-19 10:57:53.561485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.910 [2024-11-19 10:57:53.561491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.910 [2024-11-19 10:57:53.572716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.910 [2024-11-19 10:57:53.573141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.910 [2024-11-19 10:57:53.573160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.910 [2024-11-19 10:57:53.573167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.910 [2024-11-19 10:57:53.573351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.910 [2024-11-19 10:57:53.573518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.910 [2024-11-19 10:57:53.573526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.911 [2024-11-19 10:57:53.573532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.911 [2024-11-19 10:57:53.573538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.911 [2024-11-19 10:57:53.585427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.911 [2024-11-19 10:57:53.585828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.911 [2024-11-19 10:57:53.585843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.911 [2024-11-19 10:57:53.585850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.911 [2024-11-19 10:57:53.586007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.911 [2024-11-19 10:57:53.586165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.911 [2024-11-19 10:57:53.586173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.911 [2024-11-19 10:57:53.586179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.911 [2024-11-19 10:57:53.586185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.911 [2024-11-19 10:57:53.598264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.911 [2024-11-19 10:57:53.598606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.911 [2024-11-19 10:57:53.598622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.911 [2024-11-19 10:57:53.598628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.911 [2024-11-19 10:57:53.598786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.911 [2024-11-19 10:57:53.598944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.911 [2024-11-19 10:57:53.598951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.911 [2024-11-19 10:57:53.598958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.911 [2024-11-19 10:57:53.598963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.911 [2024-11-19 10:57:53.611068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.911 [2024-11-19 10:57:53.611504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.911 [2024-11-19 10:57:53.611520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.911 [2024-11-19 10:57:53.611527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.911 [2024-11-19 10:57:53.611694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.911 [2024-11-19 10:57:53.611864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.911 [2024-11-19 10:57:53.611872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.911 [2024-11-19 10:57:53.611878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.911 [2024-11-19 10:57:53.611884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.911 [2024-11-19 10:57:53.623893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.911 [2024-11-19 10:57:53.624314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.911 [2024-11-19 10:57:53.624360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.911 [2024-11-19 10:57:53.624382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.911 [2024-11-19 10:57:53.624961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.911 [2024-11-19 10:57:53.625516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.911 [2024-11-19 10:57:53.625524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.911 [2024-11-19 10:57:53.625530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.911 [2024-11-19 10:57:53.625536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.911 [2024-11-19 10:57:53.636609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.911 [2024-11-19 10:57:53.637026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.911 [2024-11-19 10:57:53.637070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.911 [2024-11-19 10:57:53.637093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.911 [2024-11-19 10:57:53.637671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.911 [2024-11-19 10:57:53.637840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.911 [2024-11-19 10:57:53.637848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.911 [2024-11-19 10:57:53.637854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.911 [2024-11-19 10:57:53.637860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.911 [2024-11-19 10:57:53.649401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.911 [2024-11-19 10:57:53.649861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.912 [2024-11-19 10:57:53.649877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.912 [2024-11-19 10:57:53.649884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.912 [2024-11-19 10:57:53.650051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.912 [2024-11-19 10:57:53.650224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.912 [2024-11-19 10:57:53.650249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.912 [2024-11-19 10:57:53.650259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.912 [2024-11-19 10:57:53.650266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.912 [2024-11-19 10:57:53.662468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.912 [2024-11-19 10:57:53.662899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.912 [2024-11-19 10:57:53.662915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.912 [2024-11-19 10:57:53.662922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.912 [2024-11-19 10:57:53.663093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.912 [2024-11-19 10:57:53.663270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.912 [2024-11-19 10:57:53.663279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.912 [2024-11-19 10:57:53.663285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.912 [2024-11-19 10:57:53.663292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.912 [2024-11-19 10:57:53.675183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.912 [2024-11-19 10:57:53.675635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.912 [2024-11-19 10:57:53.675651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.912 [2024-11-19 10:57:53.675657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.912 [2024-11-19 10:57:53.675815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.912 [2024-11-19 10:57:53.675973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.912 [2024-11-19 10:57:53.675981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.912 [2024-11-19 10:57:53.675987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.912 [2024-11-19 10:57:53.675992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.912 [2024-11-19 10:57:53.687901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.912 [2024-11-19 10:57:53.688242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.912 [2024-11-19 10:57:53.688259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:03.912 [2024-11-19 10:57:53.688266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:03.912 [2024-11-19 10:57:53.688423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:03.912 [2024-11-19 10:57:53.688581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.912 [2024-11-19 10:57:53.688589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.912 [2024-11-19 10:57:53.688594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.912 [2024-11-19 10:57:53.688600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.172 [2024-11-19 10:57:53.700745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.172 [2024-11-19 10:57:53.701153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.172 [2024-11-19 10:57:53.701169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.172 [2024-11-19 10:57:53.701175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.172 [2024-11-19 10:57:53.701361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.172 [2024-11-19 10:57:53.701527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.172 [2024-11-19 10:57:53.701535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.172 [2024-11-19 10:57:53.701542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.172 [2024-11-19 10:57:53.701547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.172 [2024-11-19 10:57:53.713565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.172 [2024-11-19 10:57:53.713984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.172 [2024-11-19 10:57:53.713999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.172 [2024-11-19 10:57:53.714005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.172 [2024-11-19 10:57:53.714163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.172 [2024-11-19 10:57:53.714348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.172 [2024-11-19 10:57:53.714357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.172 [2024-11-19 10:57:53.714363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.172 [2024-11-19 10:57:53.714369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.172 [2024-11-19 10:57:53.726381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.172 [2024-11-19 10:57:53.726805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.172 [2024-11-19 10:57:53.726850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.172 [2024-11-19 10:57:53.726874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.172 [2024-11-19 10:57:53.727469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.172 [2024-11-19 10:57:53.727895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.172 [2024-11-19 10:57:53.727903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.172 [2024-11-19 10:57:53.727909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.172 [2024-11-19 10:57:53.727916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.172 [2024-11-19 10:57:53.739176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.172 [2024-11-19 10:57:53.739602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.172 [2024-11-19 10:57:53.739617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.172 [2024-11-19 10:57:53.739627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.172 [2024-11-19 10:57:53.739785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.172 [2024-11-19 10:57:53.739944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.172 [2024-11-19 10:57:53.739951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.172 [2024-11-19 10:57:53.739957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.172 [2024-11-19 10:57:53.739963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.172 [2024-11-19 10:57:53.752017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.172 [2024-11-19 10:57:53.752433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.172 [2024-11-19 10:57:53.752450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.172 [2024-11-19 10:57:53.752457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.172 [2024-11-19 10:57:53.752624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.172 [2024-11-19 10:57:53.752790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.172 [2024-11-19 10:57:53.752798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.172 [2024-11-19 10:57:53.752804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.172 [2024-11-19 10:57:53.752810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.172 [2024-11-19 10:57:53.764740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.172 [2024-11-19 10:57:53.765152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.172 [2024-11-19 10:57:53.765167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.172 [2024-11-19 10:57:53.765174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.172 [2024-11-19 10:57:53.765360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.172 [2024-11-19 10:57:53.765531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.172 [2024-11-19 10:57:53.765540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.172 [2024-11-19 10:57:53.765546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.172 [2024-11-19 10:57:53.765552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.173 [2024-11-19 10:57:53.777454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.173 [2024-11-19 10:57:53.777875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.173 [2024-11-19 10:57:53.777891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.173 [2024-11-19 10:57:53.777898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.173 [2024-11-19 10:57:53.778056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.173 [2024-11-19 10:57:53.778223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.173 [2024-11-19 10:57:53.778248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.173 [2024-11-19 10:57:53.778254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.173 [2024-11-19 10:57:53.778261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.173 [2024-11-19 10:57:53.790159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.173 [2024-11-19 10:57:53.790575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.173 [2024-11-19 10:57:53.790592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.173 [2024-11-19 10:57:53.790599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.173 [2024-11-19 10:57:53.790765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.173 [2024-11-19 10:57:53.790932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.173 [2024-11-19 10:57:53.790940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.173 [2024-11-19 10:57:53.790946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.173 [2024-11-19 10:57:53.790952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.173 [2024-11-19 10:57:53.802982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.173 [2024-11-19 10:57:53.803368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.173 [2024-11-19 10:57:53.803385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.173 [2024-11-19 10:57:53.803392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.173 [2024-11-19 10:57:53.803550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.173 [2024-11-19 10:57:53.803708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.173 [2024-11-19 10:57:53.803715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.173 [2024-11-19 10:57:53.803721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.173 [2024-11-19 10:57:53.803727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.173 [2024-11-19 10:57:53.815836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.173 [2024-11-19 10:57:53.816227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.173 [2024-11-19 10:57:53.816244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.173 [2024-11-19 10:57:53.816251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.173 [2024-11-19 10:57:53.816409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.173 [2024-11-19 10:57:53.816567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.173 [2024-11-19 10:57:53.816574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.173 [2024-11-19 10:57:53.816583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.173 [2024-11-19 10:57:53.816590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.173 [2024-11-19 10:57:53.828547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.173 [2024-11-19 10:57:53.828937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.173 [2024-11-19 10:57:53.828953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.173 [2024-11-19 10:57:53.828959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.173 [2024-11-19 10:57:53.829117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.173 [2024-11-19 10:57:53.829299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.173 [2024-11-19 10:57:53.829308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.173 [2024-11-19 10:57:53.829314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.173 [2024-11-19 10:57:53.829320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.173 [2024-11-19 10:57:53.841400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.173 [2024-11-19 10:57:53.841816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.173 [2024-11-19 10:57:53.841831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.173 [2024-11-19 10:57:53.841838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.173 [2024-11-19 10:57:53.841996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.173 [2024-11-19 10:57:53.842154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.173 [2024-11-19 10:57:53.842162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.173 [2024-11-19 10:57:53.842168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.173 [2024-11-19 10:57:53.842174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.173 [2024-11-19 10:57:53.854212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.173 [2024-11-19 10:57:53.854578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-11-19 10:57:53.854594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.173 [2024-11-19 10:57:53.854601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.173 [2024-11-19 10:57:53.854769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.173 [2024-11-19 10:57:53.854936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.173 [2024-11-19 10:57:53.854944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.173 [2024-11-19 10:57:53.854951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.173 [2024-11-19 10:57:53.854957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.173 [2024-11-19 10:57:53.867054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.173 [2024-11-19 10:57:53.867504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-11-19 10:57:53.867549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.173 [2024-11-19 10:57:53.867572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.173 [2024-11-19 10:57:53.868110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.173 [2024-11-19 10:57:53.868282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.173 [2024-11-19 10:57:53.868291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.173 [2024-11-19 10:57:53.868297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.173 [2024-11-19 10:57:53.868303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.173 [2024-11-19 10:57:53.879813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.173 [2024-11-19 10:57:53.880232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-11-19 10:57:53.880249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.173 [2024-11-19 10:57:53.880256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.173 [2024-11-19 10:57:53.880423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.173 [2024-11-19 10:57:53.880594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.173 [2024-11-19 10:57:53.880602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.173 [2024-11-19 10:57:53.880608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.173 [2024-11-19 10:57:53.880615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.173 [2024-11-19 10:57:53.892763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.173 [2024-11-19 10:57:53.893107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-11-19 10:57:53.893123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.173 [2024-11-19 10:57:53.893131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.173 [2024-11-19 10:57:53.893303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.173 [2024-11-19 10:57:53.893471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.173 [2024-11-19 10:57:53.893480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.174 [2024-11-19 10:57:53.893486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.174 [2024-11-19 10:57:53.893492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.174 [2024-11-19 10:57:53.905584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.174 [2024-11-19 10:57:53.906000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.174 [2024-11-19 10:57:53.906017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.174 [2024-11-19 10:57:53.906028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.174 [2024-11-19 10:57:53.906200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.174 [2024-11-19 10:57:53.906380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.174 [2024-11-19 10:57:53.906389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.174 [2024-11-19 10:57:53.906395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.174 [2024-11-19 10:57:53.906402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.174 [2024-11-19 10:57:53.918582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.174 [2024-11-19 10:57:53.919019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.174 [2024-11-19 10:57:53.919036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.174 [2024-11-19 10:57:53.919044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.174 [2024-11-19 10:57:53.919222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.174 [2024-11-19 10:57:53.919397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.174 [2024-11-19 10:57:53.919405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.174 [2024-11-19 10:57:53.919411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.174 [2024-11-19 10:57:53.919418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.174 [2024-11-19 10:57:53.931527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.174 [2024-11-19 10:57:53.931975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.174 [2024-11-19 10:57:53.932019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.174 [2024-11-19 10:57:53.932042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.174 [2024-11-19 10:57:53.932486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.174 [2024-11-19 10:57:53.932655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.174 [2024-11-19 10:57:53.932663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.174 [2024-11-19 10:57:53.932669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.174 [2024-11-19 10:57:53.932675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.174 [2024-11-19 10:57:53.944261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.174 [2024-11-19 10:57:53.944694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.174 [2024-11-19 10:57:53.944710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.174 [2024-11-19 10:57:53.944717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.174 [2024-11-19 10:57:53.944885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.174 [2024-11-19 10:57:53.945055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.174 [2024-11-19 10:57:53.945063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.174 [2024-11-19 10:57:53.945069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.174 [2024-11-19 10:57:53.945075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.174 [2024-11-19 10:57:53.957263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.174 [2024-11-19 10:57:53.957667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.174 [2024-11-19 10:57:53.957683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.174 [2024-11-19 10:57:53.957690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.174 [2024-11-19 10:57:53.957861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.174 [2024-11-19 10:57:53.958034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.174 [2024-11-19 10:57:53.958042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.174 [2024-11-19 10:57:53.958048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.174 [2024-11-19 10:57:53.958055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.434 [2024-11-19 10:57:53.970099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.434 [2024-11-19 10:57:53.970517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.434 [2024-11-19 10:57:53.970533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.434 [2024-11-19 10:57:53.970540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.434 [2024-11-19 10:57:53.970706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.434 [2024-11-19 10:57:53.970873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.434 [2024-11-19 10:57:53.970881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.434 [2024-11-19 10:57:53.970887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.434 [2024-11-19 10:57:53.970893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.434 [2024-11-19 10:57:53.982822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.434 [2024-11-19 10:57:53.983197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.434 [2024-11-19 10:57:53.983254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.434 [2024-11-19 10:57:53.983277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.434 [2024-11-19 10:57:53.983855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.434 [2024-11-19 10:57:53.984357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.434 [2024-11-19 10:57:53.984365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.435 [2024-11-19 10:57:53.984374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.435 [2024-11-19 10:57:53.984381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.435 [2024-11-19 10:57:53.995609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.435 [2024-11-19 10:57:53.995995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.435 [2024-11-19 10:57:53.996010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.435 [2024-11-19 10:57:53.996017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.435 [2024-11-19 10:57:53.996175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.435 [2024-11-19 10:57:53.996360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.435 [2024-11-19 10:57:53.996368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.435 [2024-11-19 10:57:53.996374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.435 [2024-11-19 10:57:53.996380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.435 [2024-11-19 10:57:54.008448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.435 [2024-11-19 10:57:54.008877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.435 [2024-11-19 10:57:54.008893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.435 [2024-11-19 10:57:54.008900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.435 [2024-11-19 10:57:54.009067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.435 [2024-11-19 10:57:54.009240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.435 [2024-11-19 10:57:54.009249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.435 [2024-11-19 10:57:54.009255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.435 [2024-11-19 10:57:54.009261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.435 [2024-11-19 10:57:54.021236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.435 [2024-11-19 10:57:54.021620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.435 [2024-11-19 10:57:54.021637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.435 [2024-11-19 10:57:54.021644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.435 [2024-11-19 10:57:54.021809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.435 [2024-11-19 10:57:54.021976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.435 [2024-11-19 10:57:54.021984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.435 [2024-11-19 10:57:54.021990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.435 [2024-11-19 10:57:54.021996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.435 [2024-11-19 10:57:54.034002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.435 [2024-11-19 10:57:54.034425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.435 [2024-11-19 10:57:54.034440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.435 [2024-11-19 10:57:54.034447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.435 [2024-11-19 10:57:54.034614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.435 [2024-11-19 10:57:54.034785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.435 [2024-11-19 10:57:54.034793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.435 [2024-11-19 10:57:54.034799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.435 [2024-11-19 10:57:54.034805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.435 [2024-11-19 10:57:54.046776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.435 [2024-11-19 10:57:54.047166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.435 [2024-11-19 10:57:54.047182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.435 [2024-11-19 10:57:54.047188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.435 [2024-11-19 10:57:54.047375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.435 [2024-11-19 10:57:54.047543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.435 [2024-11-19 10:57:54.047551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.435 [2024-11-19 10:57:54.047557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.435 [2024-11-19 10:57:54.047563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.435 [2024-11-19 10:57:54.059597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.435 [2024-11-19 10:57:54.059985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.435 [2024-11-19 10:57:54.060002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.435 [2024-11-19 10:57:54.060009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.435 [2024-11-19 10:57:54.060175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.435 [2024-11-19 10:57:54.060352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.435 [2024-11-19 10:57:54.060361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.435 [2024-11-19 10:57:54.060367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.435 [2024-11-19 10:57:54.060373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.435 [2024-11-19 10:57:54.072336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.435 [2024-11-19 10:57:54.072701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.435 [2024-11-19 10:57:54.072717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.435 [2024-11-19 10:57:54.072726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.435 [2024-11-19 10:57:54.072884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.435 [2024-11-19 10:57:54.073041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.435 [2024-11-19 10:57:54.073049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.435 [2024-11-19 10:57:54.073055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.435 [2024-11-19 10:57:54.073060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.435 [2024-11-19 10:57:54.085149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.435 [2024-11-19 10:57:54.085547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.435 [2024-11-19 10:57:54.085563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.435 [2024-11-19 10:57:54.085570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.435 [2024-11-19 10:57:54.085736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.435 [2024-11-19 10:57:54.085904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.435 [2024-11-19 10:57:54.085912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.435 [2024-11-19 10:57:54.085918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.435 [2024-11-19 10:57:54.085925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.435 [2024-11-19 10:57:54.097862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.435 [2024-11-19 10:57:54.098267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.435 [2024-11-19 10:57:54.098311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.435 [2024-11-19 10:57:54.098334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.435 [2024-11-19 10:57:54.098794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.435 [2024-11-19 10:57:54.098952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.435 [2024-11-19 10:57:54.098959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.435 [2024-11-19 10:57:54.098965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.435 [2024-11-19 10:57:54.098971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.435 [2024-11-19 10:57:54.110691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.435 [2024-11-19 10:57:54.111098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.435 [2024-11-19 10:57:54.111113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.436 [2024-11-19 10:57:54.111120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.436 [2024-11-19 10:57:54.111293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.436 [2024-11-19 10:57:54.111463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.436 [2024-11-19 10:57:54.111471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.436 [2024-11-19 10:57:54.111477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.436 [2024-11-19 10:57:54.111483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.436 [2024-11-19 10:57:54.123488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.436 [2024-11-19 10:57:54.123896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.436 [2024-11-19 10:57:54.123911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.436 [2024-11-19 10:57:54.123918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.436 [2024-11-19 10:57:54.124088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.436 [2024-11-19 10:57:54.124264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.436 [2024-11-19 10:57:54.124272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.436 [2024-11-19 10:57:54.124278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.436 [2024-11-19 10:57:54.124284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.436 [2024-11-19 10:57:54.136204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.436 [2024-11-19 10:57:54.136598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.436 [2024-11-19 10:57:54.136614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.436 [2024-11-19 10:57:54.136620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.436 [2024-11-19 10:57:54.136777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.436 [2024-11-19 10:57:54.136935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.436 [2024-11-19 10:57:54.136943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.436 [2024-11-19 10:57:54.136949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.436 [2024-11-19 10:57:54.136954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.436 [2024-11-19 10:57:54.149015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.436 [2024-11-19 10:57:54.149428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.436 [2024-11-19 10:57:54.149445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.436 [2024-11-19 10:57:54.149452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.436 [2024-11-19 10:57:54.149618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.436 [2024-11-19 10:57:54.149787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.436 [2024-11-19 10:57:54.149794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.436 [2024-11-19 10:57:54.149800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.436 [2024-11-19 10:57:54.149810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.436 [2024-11-19 10:57:54.161838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.436 [2024-11-19 10:57:54.162266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.436 [2024-11-19 10:57:54.162283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.436 [2024-11-19 10:57:54.162290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.436 [2024-11-19 10:57:54.162463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.436 [2024-11-19 10:57:54.162636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.436 [2024-11-19 10:57:54.162644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.436 [2024-11-19 10:57:54.162651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.436 [2024-11-19 10:57:54.162657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.436 [2024-11-19 10:57:54.174877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.436 [2024-11-19 10:57:54.175246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.436 [2024-11-19 10:57:54.175263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.436 [2024-11-19 10:57:54.175271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.436 [2024-11-19 10:57:54.175442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.436 [2024-11-19 10:57:54.175616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.436 [2024-11-19 10:57:54.175624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.436 [2024-11-19 10:57:54.175631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.436 [2024-11-19 10:57:54.175637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.436 [2024-11-19 10:57:54.187937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.436 [2024-11-19 10:57:54.188339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.436 [2024-11-19 10:57:54.188356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.436 [2024-11-19 10:57:54.188363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.436 [2024-11-19 10:57:54.188534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.436 [2024-11-19 10:57:54.188706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.436 [2024-11-19 10:57:54.188714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.436 [2024-11-19 10:57:54.188720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.436 [2024-11-19 10:57:54.188727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.436 [2024-11-19 10:57:54.201033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.436 [2024-11-19 10:57:54.201450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.436 [2024-11-19 10:57:54.201468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.436 [2024-11-19 10:57:54.201475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.436 [2024-11-19 10:57:54.201647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.436 [2024-11-19 10:57:54.201819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.436 [2024-11-19 10:57:54.201827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.436 [2024-11-19 10:57:54.201834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.436 [2024-11-19 10:57:54.201840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.436 [2024-11-19 10:57:54.213958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.436 [2024-11-19 10:57:54.214359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.436 [2024-11-19 10:57:54.214376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.436 [2024-11-19 10:57:54.214384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.436 [2024-11-19 10:57:54.214555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.436 [2024-11-19 10:57:54.214728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.436 [2024-11-19 10:57:54.214736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.436 [2024-11-19 10:57:54.214742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.436 [2024-11-19 10:57:54.214749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.698 [2024-11-19 10:57:54.226973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.698 [2024-11-19 10:57:54.227376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.698 [2024-11-19 10:57:54.227393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.698 [2024-11-19 10:57:54.227400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.698 [2024-11-19 10:57:54.227571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.698 [2024-11-19 10:57:54.227743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.698 [2024-11-19 10:57:54.227751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.698 [2024-11-19 10:57:54.227758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.698 [2024-11-19 10:57:54.227764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.698 [2024-11-19 10:57:54.239799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.698 [2024-11-19 10:57:54.240216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.698 [2024-11-19 10:57:54.240233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.698 [2024-11-19 10:57:54.240240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.698 [2024-11-19 10:57:54.240410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.698 [2024-11-19 10:57:54.240579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.698 [2024-11-19 10:57:54.240587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.698 [2024-11-19 10:57:54.240593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.698 [2024-11-19 10:57:54.240598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.698 [2024-11-19 10:57:54.252684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.698 [2024-11-19 10:57:54.253099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.698 [2024-11-19 10:57:54.253115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.698 [2024-11-19 10:57:54.253122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.698 [2024-11-19 10:57:54.253295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.698 [2024-11-19 10:57:54.253462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.698 [2024-11-19 10:57:54.253470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.698 [2024-11-19 10:57:54.253476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.698 [2024-11-19 10:57:54.253482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.698 [2024-11-19 10:57:54.265538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.698 [2024-11-19 10:57:54.265952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.698 [2024-11-19 10:57:54.265969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.698 [2024-11-19 10:57:54.265976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.698 [2024-11-19 10:57:54.266142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.698 [2024-11-19 10:57:54.266319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.698 [2024-11-19 10:57:54.266329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.698 [2024-11-19 10:57:54.266335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.698 [2024-11-19 10:57:54.266341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.698 [2024-11-19 10:57:54.278470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.698 [2024-11-19 10:57:54.278891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.698 [2024-11-19 10:57:54.278907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.698 [2024-11-19 10:57:54.278914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.698 [2024-11-19 10:57:54.279081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.698 [2024-11-19 10:57:54.279255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.698 [2024-11-19 10:57:54.279267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.698 [2024-11-19 10:57:54.279273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.698 [2024-11-19 10:57:54.279279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.698 [2024-11-19 10:57:54.291256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.698 [2024-11-19 10:57:54.291618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.698 [2024-11-19 10:57:54.291634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.698 [2024-11-19 10:57:54.291641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.698 [2024-11-19 10:57:54.291808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.698 [2024-11-19 10:57:54.291974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.698 [2024-11-19 10:57:54.291982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.699 [2024-11-19 10:57:54.291989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.699 [2024-11-19 10:57:54.291995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.699 [2024-11-19 10:57:54.304158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.699 [2024-11-19 10:57:54.304532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.699 [2024-11-19 10:57:54.304549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.699 [2024-11-19 10:57:54.304558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.699 [2024-11-19 10:57:54.304727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.699 [2024-11-19 10:57:54.304895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.699 [2024-11-19 10:57:54.304903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.699 [2024-11-19 10:57:54.304910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.699 [2024-11-19 10:57:54.304915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.699 [2024-11-19 10:57:54.317053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.699 [2024-11-19 10:57:54.317449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.699 [2024-11-19 10:57:54.317466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.699 [2024-11-19 10:57:54.317473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.699 [2024-11-19 10:57:54.317639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.699 [2024-11-19 10:57:54.317807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.699 [2024-11-19 10:57:54.317816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.699 [2024-11-19 10:57:54.317822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.699 [2024-11-19 10:57:54.317831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.699 [2024-11-19 10:57:54.330166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.699 [2024-11-19 10:57:54.330617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.699 [2024-11-19 10:57:54.330634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.699 [2024-11-19 10:57:54.330641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.699 [2024-11-19 10:57:54.330812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.699 [2024-11-19 10:57:54.330985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.699 [2024-11-19 10:57:54.330993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.699 [2024-11-19 10:57:54.331000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.699 [2024-11-19 10:57:54.331006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.699 [2024-11-19 10:57:54.343038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.699 [2024-11-19 10:57:54.343454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.699 [2024-11-19 10:57:54.343471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.699 [2024-11-19 10:57:54.343478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.699 [2024-11-19 10:57:54.343645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.699 [2024-11-19 10:57:54.343812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.699 [2024-11-19 10:57:54.343820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.699 [2024-11-19 10:57:54.343827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.699 [2024-11-19 10:57:54.343833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.699 [2024-11-19 10:57:54.355911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.699 [2024-11-19 10:57:54.356317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.699 [2024-11-19 10:57:54.356335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.699 [2024-11-19 10:57:54.356342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.699 [2024-11-19 10:57:54.356508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.699 [2024-11-19 10:57:54.356675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.699 [2024-11-19 10:57:54.356682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.699 [2024-11-19 10:57:54.356689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.699 [2024-11-19 10:57:54.356695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.699 [2024-11-19 10:57:54.368813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.699 [2024-11-19 10:57:54.369168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.699 [2024-11-19 10:57:54.369184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.699 [2024-11-19 10:57:54.369191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.699 [2024-11-19 10:57:54.369363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.699 [2024-11-19 10:57:54.369531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.699 [2024-11-19 10:57:54.369539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.699 [2024-11-19 10:57:54.369545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.699 [2024-11-19 10:57:54.369551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.699 [2024-11-19 10:57:54.381685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.699 [2024-11-19 10:57:54.382131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.699 [2024-11-19 10:57:54.382174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.699 [2024-11-19 10:57:54.382197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.699 [2024-11-19 10:57:54.382642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.699 [2024-11-19 10:57:54.382810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.699 [2024-11-19 10:57:54.382818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.699 [2024-11-19 10:57:54.382824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.699 [2024-11-19 10:57:54.382830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.699 [2024-11-19 10:57:54.394518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.699 [2024-11-19 10:57:54.394854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.699 [2024-11-19 10:57:54.394898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.699 [2024-11-19 10:57:54.394921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.699 [2024-11-19 10:57:54.395396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.699 [2024-11-19 10:57:54.395565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.699 [2024-11-19 10:57:54.395573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.699 [2024-11-19 10:57:54.395579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.699 [2024-11-19 10:57:54.395585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.699 [2024-11-19 10:57:54.407334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.699 [2024-11-19 10:57:54.407685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.699 [2024-11-19 10:57:54.407701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:04.699 [2024-11-19 10:57:54.407708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:04.699 [2024-11-19 10:57:54.407878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:04.699 [2024-11-19 10:57:54.408045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.699 [2024-11-19 10:57:54.408053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.699 [2024-11-19 10:57:54.408059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.699 [2024-11-19 10:57:54.408066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.699 [2024-11-19 10:57:54.420269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.699 [2024-11-19 10:57:54.420640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.699 [2024-11-19 10:57:54.420657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.699 [2024-11-19 10:57:54.420664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.700 [2024-11-19 10:57:54.420837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.700 [2024-11-19 10:57:54.421009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.700 [2024-11-19 10:57:54.421017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.700 [2024-11-19 10:57:54.421024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.700 [2024-11-19 10:57:54.421030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.700 6089.80 IOPS, 23.79 MiB/s [2024-11-19T09:57:54.492Z] [2024-11-19 10:57:54.433347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.700 [2024-11-19 10:57:54.433641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.700 [2024-11-19 10:57:54.433657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.700 [2024-11-19 10:57:54.433664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.700 [2024-11-19 10:57:54.433836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.700 [2024-11-19 10:57:54.434008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.700 [2024-11-19 10:57:54.434017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.700 [2024-11-19 10:57:54.434023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.700 [2024-11-19 10:57:54.434030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.700 [2024-11-19 10:57:54.446241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.700 [2024-11-19 10:57:54.446550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.700 [2024-11-19 10:57:54.446568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.700 [2024-11-19 10:57:54.446575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.700 [2024-11-19 10:57:54.446742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.700 [2024-11-19 10:57:54.446910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.700 [2024-11-19 10:57:54.446921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.700 [2024-11-19 10:57:54.446928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.700 [2024-11-19 10:57:54.446934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.700 [2024-11-19 10:57:54.459167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.700 [2024-11-19 10:57:54.459548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.700 [2024-11-19 10:57:54.459593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.700 [2024-11-19 10:57:54.459615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.700 [2024-11-19 10:57:54.460193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.700 [2024-11-19 10:57:54.460651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.700 [2024-11-19 10:57:54.460659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.700 [2024-11-19 10:57:54.460665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.700 [2024-11-19 10:57:54.460671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.700 [2024-11-19 10:57:54.471944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.700 [2024-11-19 10:57:54.472367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.700 [2024-11-19 10:57:54.472412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.700 [2024-11-19 10:57:54.472436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.700 [2024-11-19 10:57:54.473014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.700 [2024-11-19 10:57:54.473534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.700 [2024-11-19 10:57:54.473543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.700 [2024-11-19 10:57:54.473549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.700 [2024-11-19 10:57:54.473556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.700 [2024-11-19 10:57:54.484911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.700 [2024-11-19 10:57:54.485296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.700 [2024-11-19 10:57:54.485313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.700 [2024-11-19 10:57:54.485320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.700 [2024-11-19 10:57:54.485492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.960 [2024-11-19 10:57:54.485664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.960 [2024-11-19 10:57:54.485673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.960 [2024-11-19 10:57:54.485681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.960 [2024-11-19 10:57:54.485693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.960 [2024-11-19 10:57:54.497830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.960 [2024-11-19 10:57:54.498292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-11-19 10:57:54.498338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.960 [2024-11-19 10:57:54.498361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.960 [2024-11-19 10:57:54.498623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.960 [2024-11-19 10:57:54.498791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.960 [2024-11-19 10:57:54.498798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.960 [2024-11-19 10:57:54.498804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.960 [2024-11-19 10:57:54.498811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.961 [2024-11-19 10:57:54.510711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.961 [2024-11-19 10:57:54.511094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-11-19 10:57:54.511110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.961 [2024-11-19 10:57:54.511117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.961 [2024-11-19 10:57:54.511289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.961 [2024-11-19 10:57:54.511456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.961 [2024-11-19 10:57:54.511465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.961 [2024-11-19 10:57:54.511471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.961 [2024-11-19 10:57:54.511477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.961 [2024-11-19 10:57:54.523552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.961 [2024-11-19 10:57:54.523975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-11-19 10:57:54.523991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.961 [2024-11-19 10:57:54.523998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.961 [2024-11-19 10:57:54.524164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.961 [2024-11-19 10:57:54.524336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.961 [2024-11-19 10:57:54.524344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.961 [2024-11-19 10:57:54.524350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.961 [2024-11-19 10:57:54.524357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.961 [2024-11-19 10:57:54.536460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.961 [2024-11-19 10:57:54.536795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-11-19 10:57:54.536810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.961 [2024-11-19 10:57:54.536817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.961 [2024-11-19 10:57:54.536984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.961 [2024-11-19 10:57:54.537154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.961 [2024-11-19 10:57:54.537162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.961 [2024-11-19 10:57:54.537168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.961 [2024-11-19 10:57:54.537174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.961 [2024-11-19 10:57:54.549209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.961 [2024-11-19 10:57:54.549571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-11-19 10:57:54.549586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.961 [2024-11-19 10:57:54.549593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.961 [2024-11-19 10:57:54.549759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.961 [2024-11-19 10:57:54.549925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.961 [2024-11-19 10:57:54.549933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.961 [2024-11-19 10:57:54.549939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.961 [2024-11-19 10:57:54.549945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.961 [2024-11-19 10:57:54.562013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.961 [2024-11-19 10:57:54.562354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-11-19 10:57:54.562370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.961 [2024-11-19 10:57:54.562377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.961 [2024-11-19 10:57:54.562544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.961 [2024-11-19 10:57:54.562710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.961 [2024-11-19 10:57:54.562718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.961 [2024-11-19 10:57:54.562724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.961 [2024-11-19 10:57:54.562730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.961 [2024-11-19 10:57:54.574863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.961 [2024-11-19 10:57:54.575277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-11-19 10:57:54.575328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.961 [2024-11-19 10:57:54.575351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.961 [2024-11-19 10:57:54.575903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.961 [2024-11-19 10:57:54.576071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.961 [2024-11-19 10:57:54.576080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.961 [2024-11-19 10:57:54.576086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.961 [2024-11-19 10:57:54.576092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.961 [2024-11-19 10:57:54.587669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.961 [2024-11-19 10:57:54.588068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-11-19 10:57:54.588085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.961 [2024-11-19 10:57:54.588092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.961 [2024-11-19 10:57:54.588265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.961 [2024-11-19 10:57:54.588433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.961 [2024-11-19 10:57:54.588441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.961 [2024-11-19 10:57:54.588447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.961 [2024-11-19 10:57:54.588453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.961 [2024-11-19 10:57:54.600433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.961 [2024-11-19 10:57:54.600791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-11-19 10:57:54.600807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.961 [2024-11-19 10:57:54.600814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.961 [2024-11-19 10:57:54.600981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.961 [2024-11-19 10:57:54.601148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.961 [2024-11-19 10:57:54.601156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.961 [2024-11-19 10:57:54.601162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.961 [2024-11-19 10:57:54.601169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.961 [2024-11-19 10:57:54.613207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.961 [2024-11-19 10:57:54.613504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-11-19 10:57:54.613520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.961 [2024-11-19 10:57:54.613526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.961 [2024-11-19 10:57:54.613693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.961 [2024-11-19 10:57:54.613863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.961 [2024-11-19 10:57:54.613874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.961 [2024-11-19 10:57:54.613880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.961 [2024-11-19 10:57:54.613886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.961 [2024-11-19 10:57:54.626137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.961 [2024-11-19 10:57:54.626515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-11-19 10:57:54.626531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.961 [2024-11-19 10:57:54.626539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.962 [2024-11-19 10:57:54.626709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.962 [2024-11-19 10:57:54.626882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.962 [2024-11-19 10:57:54.626890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.962 [2024-11-19 10:57:54.626896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.962 [2024-11-19 10:57:54.626903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.962 [2024-11-19 10:57:54.639059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.962 [2024-11-19 10:57:54.639353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.962 [2024-11-19 10:57:54.639369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.962 [2024-11-19 10:57:54.639376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.962 [2024-11-19 10:57:54.639542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.962 [2024-11-19 10:57:54.639709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.962 [2024-11-19 10:57:54.639718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.962 [2024-11-19 10:57:54.639724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.962 [2024-11-19 10:57:54.639730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.962 [2024-11-19 10:57:54.651908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.962 [2024-11-19 10:57:54.652259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.962 [2024-11-19 10:57:54.652276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.962 [2024-11-19 10:57:54.652284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.962 [2024-11-19 10:57:54.652450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.962 [2024-11-19 10:57:54.652616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.962 [2024-11-19 10:57:54.652625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.962 [2024-11-19 10:57:54.652631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.962 [2024-11-19 10:57:54.652641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.962 [2024-11-19 10:57:54.664722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.962 [2024-11-19 10:57:54.665139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.962 [2024-11-19 10:57:54.665192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.962 [2024-11-19 10:57:54.665228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.962 [2024-11-19 10:57:54.665777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.962 [2024-11-19 10:57:54.665944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.962 [2024-11-19 10:57:54.665952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.962 [2024-11-19 10:57:54.665958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.962 [2024-11-19 10:57:54.665964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.962 [2024-11-19 10:57:54.677582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.962 [2024-11-19 10:57:54.678002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.962 [2024-11-19 10:57:54.678019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.962 [2024-11-19 10:57:54.678026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.962 [2024-11-19 10:57:54.678198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.962 [2024-11-19 10:57:54.678376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.962 [2024-11-19 10:57:54.678384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.962 [2024-11-19 10:57:54.678390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.962 [2024-11-19 10:57:54.678397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.962 [2024-11-19 10:57:54.690589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.962 [2024-11-19 10:57:54.691051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.962 [2024-11-19 10:57:54.691067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.962 [2024-11-19 10:57:54.691075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.962 [2024-11-19 10:57:54.691251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.962 [2024-11-19 10:57:54.691424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.962 [2024-11-19 10:57:54.691432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.962 [2024-11-19 10:57:54.691439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.962 [2024-11-19 10:57:54.691446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.962 [2024-11-19 10:57:54.703612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.962 [2024-11-19 10:57:54.704047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.962 [2024-11-19 10:57:54.704098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.962 [2024-11-19 10:57:54.704122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.962 [2024-11-19 10:57:54.704715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.962 [2024-11-19 10:57:54.705258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.962 [2024-11-19 10:57:54.705266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.962 [2024-11-19 10:57:54.705273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.962 [2024-11-19 10:57:54.705279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.962 [2024-11-19 10:57:54.716490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.962 [2024-11-19 10:57:54.716925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.962 [2024-11-19 10:57:54.716941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.962 [2024-11-19 10:57:54.716948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.962 [2024-11-19 10:57:54.717115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.962 [2024-11-19 10:57:54.717287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.962 [2024-11-19 10:57:54.717295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.962 [2024-11-19 10:57:54.717302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.962 [2024-11-19 10:57:54.717308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.962 [2024-11-19 10:57:54.729237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.962 [2024-11-19 10:57:54.729660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.962 [2024-11-19 10:57:54.729704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.962 [2024-11-19 10:57:54.729727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.962 [2024-11-19 10:57:54.730211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.962 [2024-11-19 10:57:54.730380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.962 [2024-11-19 10:57:54.730388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.962 [2024-11-19 10:57:54.730394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.962 [2024-11-19 10:57:54.730400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.962 [2024-11-19 10:57:54.742057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.962 [2024-11-19 10:57:54.742463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.962 [2024-11-19 10:57:54.742480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:04.962 [2024-11-19 10:57:54.742487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:04.962 [2024-11-19 10:57:54.742657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:04.962 [2024-11-19 10:57:54.742824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.962 [2024-11-19 10:57:54.742832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.963 [2024-11-19 10:57:54.742837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.963 [2024-11-19 10:57:54.742844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.223 [2024-11-19 10:57:54.755010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.223 [2024-11-19 10:57:54.755417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.223 [2024-11-19 10:57:54.755433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.223 [2024-11-19 10:57:54.755440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.223 [2024-11-19 10:57:54.755612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.223 [2024-11-19 10:57:54.755783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.223 [2024-11-19 10:57:54.755791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.223 [2024-11-19 10:57:54.755798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.223 [2024-11-19 10:57:54.755804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.223 [2024-11-19 10:57:54.767739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.223 [2024-11-19 10:57:54.768155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.223 [2024-11-19 10:57:54.768198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.223 [2024-11-19 10:57:54.768236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.223 [2024-11-19 10:57:54.768816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.223 [2024-11-19 10:57:54.769349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.223 [2024-11-19 10:57:54.769357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.223 [2024-11-19 10:57:54.769363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.223 [2024-11-19 10:57:54.769369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.223 [2024-11-19 10:57:54.780577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.223 [2024-11-19 10:57:54.780964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.223 [2024-11-19 10:57:54.780979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.223 [2024-11-19 10:57:54.780986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.223 [2024-11-19 10:57:54.781144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.223 [2024-11-19 10:57:54.781327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.223 [2024-11-19 10:57:54.781345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.223 [2024-11-19 10:57:54.781352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.223 [2024-11-19 10:57:54.781358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.223 [2024-11-19 10:57:54.793478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.223 [2024-11-19 10:57:54.793888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.223 [2024-11-19 10:57:54.793904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.223 [2024-11-19 10:57:54.793911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.223 [2024-11-19 10:57:54.794077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.223 [2024-11-19 10:57:54.794266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.224 [2024-11-19 10:57:54.794275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.224 [2024-11-19 10:57:54.794281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.224 [2024-11-19 10:57:54.794287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.224 [2024-11-19 10:57:54.806295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.224 [2024-11-19 10:57:54.806661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.224 [2024-11-19 10:57:54.806677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.224 [2024-11-19 10:57:54.806684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.224 [2024-11-19 10:57:54.806841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.224 [2024-11-19 10:57:54.806999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.224 [2024-11-19 10:57:54.807007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.224 [2024-11-19 10:57:54.807013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.224 [2024-11-19 10:57:54.807018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.224 [2024-11-19 10:57:54.819093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.224 [2024-11-19 10:57:54.819515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.224 [2024-11-19 10:57:54.819558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.224 [2024-11-19 10:57:54.819581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.224 [2024-11-19 10:57:54.820159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.224 [2024-11-19 10:57:54.820657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.224 [2024-11-19 10:57:54.820675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.224 [2024-11-19 10:57:54.820689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.224 [2024-11-19 10:57:54.820703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.224 [2024-11-19 10:57:54.833995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.224 [2024-11-19 10:57:54.834501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.224 [2024-11-19 10:57:54.834545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.224 [2024-11-19 10:57:54.834569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.224 [2024-11-19 10:57:54.835139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.224 [2024-11-19 10:57:54.835399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.224 [2024-11-19 10:57:54.835411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.224 [2024-11-19 10:57:54.835421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.224 [2024-11-19 10:57:54.835430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.224 [2024-11-19 10:57:54.846923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.224 [2024-11-19 10:57:54.847270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.224 [2024-11-19 10:57:54.847287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.224 [2024-11-19 10:57:54.847294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.224 [2024-11-19 10:57:54.847461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.224 [2024-11-19 10:57:54.847630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.224 [2024-11-19 10:57:54.847638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.224 [2024-11-19 10:57:54.847645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.224 [2024-11-19 10:57:54.847650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.224 [2024-11-19 10:57:54.859678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.224 [2024-11-19 10:57:54.860103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.224 [2024-11-19 10:57:54.860147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.224 [2024-11-19 10:57:54.860169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.224 [2024-11-19 10:57:54.860605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.224 [2024-11-19 10:57:54.860773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.224 [2024-11-19 10:57:54.860781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.224 [2024-11-19 10:57:54.860787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.224 [2024-11-19 10:57:54.860793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.224 [2024-11-19 10:57:54.872498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.224 [2024-11-19 10:57:54.872884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.224 [2024-11-19 10:57:54.872903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.224 [2024-11-19 10:57:54.872910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.224 [2024-11-19 10:57:54.873068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.224 [2024-11-19 10:57:54.873247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.224 [2024-11-19 10:57:54.873256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.224 [2024-11-19 10:57:54.873262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.224 [2024-11-19 10:57:54.873268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.224 [2024-11-19 10:57:54.885217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.224 [2024-11-19 10:57:54.885610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.224 [2024-11-19 10:57:54.885652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.224 [2024-11-19 10:57:54.885674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.224 [2024-11-19 10:57:54.886266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.224 [2024-11-19 10:57:54.886756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.224 [2024-11-19 10:57:54.886764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.224 [2024-11-19 10:57:54.886770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.224 [2024-11-19 10:57:54.886776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.224 [2024-11-19 10:57:54.897990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.224 [2024-11-19 10:57:54.898392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.224 [2024-11-19 10:57:54.898409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.224 [2024-11-19 10:57:54.898416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.224 [2024-11-19 10:57:54.898583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.224 [2024-11-19 10:57:54.898749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.224 [2024-11-19 10:57:54.898757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.224 [2024-11-19 10:57:54.898763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.224 [2024-11-19 10:57:54.898770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.224 [2024-11-19 10:57:54.910792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.224 [2024-11-19 10:57:54.911224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.224 [2024-11-19 10:57:54.911268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.224 [2024-11-19 10:57:54.911291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.224 [2024-11-19 10:57:54.911808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.224 [2024-11-19 10:57:54.911975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.224 [2024-11-19 10:57:54.911983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.224 [2024-11-19 10:57:54.911989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.224 [2024-11-19 10:57:54.911995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.224 [2024-11-19 10:57:54.923517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.224 [2024-11-19 10:57:54.923916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.224 [2024-11-19 10:57:54.923959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.224 [2024-11-19 10:57:54.923981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.225 [2024-11-19 10:57:54.924458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.225 [2024-11-19 10:57:54.924625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.225 [2024-11-19 10:57:54.924633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.225 [2024-11-19 10:57:54.924639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.225 [2024-11-19 10:57:54.924645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.225 [2024-11-19 10:57:54.936288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.225 [2024-11-19 10:57:54.936705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.225 [2024-11-19 10:57:54.936721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.225 [2024-11-19 10:57:54.936728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.225 [2024-11-19 10:57:54.936895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.225 [2024-11-19 10:57:54.937066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.225 [2024-11-19 10:57:54.937074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.225 [2024-11-19 10:57:54.937080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.225 [2024-11-19 10:57:54.937087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.225 [2024-11-19 10:57:54.949297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.225 [2024-11-19 10:57:54.949692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.225 [2024-11-19 10:57:54.949708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.225 [2024-11-19 10:57:54.949715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.225 [2024-11-19 10:57:54.949886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.225 [2024-11-19 10:57:54.950057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.225 [2024-11-19 10:57:54.950066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.225 [2024-11-19 10:57:54.950076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.225 [2024-11-19 10:57:54.950083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.225 [2024-11-19 10:57:54.962229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.225 [2024-11-19 10:57:54.962656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.225 [2024-11-19 10:57:54.962672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.225 [2024-11-19 10:57:54.962679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.225 [2024-11-19 10:57:54.962846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.225 [2024-11-19 10:57:54.963017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.225 [2024-11-19 10:57:54.963026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.225 [2024-11-19 10:57:54.963032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.225 [2024-11-19 10:57:54.963038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.225 [2024-11-19 10:57:54.975158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.225 [2024-11-19 10:57:54.975590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.225 [2024-11-19 10:57:54.975635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.225 [2024-11-19 10:57:54.975658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.225 [2024-11-19 10:57:54.976070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.225 [2024-11-19 10:57:54.976242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.225 [2024-11-19 10:57:54.976251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.225 [2024-11-19 10:57:54.976257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.225 [2024-11-19 10:57:54.976263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.225 [2024-11-19 10:57:54.988014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.225 [2024-11-19 10:57:54.988434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.225 [2024-11-19 10:57:54.988450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.225 [2024-11-19 10:57:54.988458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.225 [2024-11-19 10:57:54.988626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.225 [2024-11-19 10:57:54.988791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.225 [2024-11-19 10:57:54.988799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.225 [2024-11-19 10:57:54.988806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.225 [2024-11-19 10:57:54.988812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.225 [2024-11-19 10:57:55.000885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.225 [2024-11-19 10:57:55.001221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.225 [2024-11-19 10:57:55.001254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.225 [2024-11-19 10:57:55.001261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.225 [2024-11-19 10:57:55.001428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.225 [2024-11-19 10:57:55.001594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.225 [2024-11-19 10:57:55.001602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.225 [2024-11-19 10:57:55.001608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.225 [2024-11-19 10:57:55.001613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.488 [2024-11-19 10:57:55.013976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.488 [2024-11-19 10:57:55.014398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-11-19 10:57:55.014416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.488 [2024-11-19 10:57:55.014423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.488 [2024-11-19 10:57:55.014594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.488 [2024-11-19 10:57:55.014766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.488 [2024-11-19 10:57:55.014774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.488 [2024-11-19 10:57:55.014780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.488 [2024-11-19 10:57:55.014786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.488 [2024-11-19 10:57:55.026935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.488 [2024-11-19 10:57:55.027292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.488 [2024-11-19 10:57:55.027309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.488 [2024-11-19 10:57:55.027316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.488 [2024-11-19 10:57:55.027483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.488 [2024-11-19 10:57:55.027651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.488 [2024-11-19 10:57:55.027659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.488 [2024-11-19 10:57:55.027665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.488 [2024-11-19 10:57:55.027672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.488 [2024-11-19 10:57:55.039938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.488 [2024-11-19 10:57:55.040269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.488 [2024-11-19 10:57:55.040286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.488 [2024-11-19 10:57:55.040296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.488 [2024-11-19 10:57:55.040462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.488 [2024-11-19 10:57:55.040631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.488 [2024-11-19 10:57:55.040639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.488 [2024-11-19 10:57:55.040645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.488 [2024-11-19 10:57:55.040651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.488 [2024-11-19 10:57:55.052860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.488 [2024-11-19 10:57:55.053311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.488 [2024-11-19 10:57:55.053328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.488 [2024-11-19 10:57:55.053335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.488 [2024-11-19 10:57:55.053501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.488 [2024-11-19 10:57:55.053667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.488 [2024-11-19 10:57:55.053675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.488 [2024-11-19 10:57:55.053681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.488 [2024-11-19 10:57:55.053687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.488 [2024-11-19 10:57:55.065695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.488 [2024-11-19 10:57:55.066130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.488 [2024-11-19 10:57:55.066174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.488 [2024-11-19 10:57:55.066197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.488 [2024-11-19 10:57:55.066790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.488 [2024-11-19 10:57:55.067171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.488 [2024-11-19 10:57:55.067179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.488 [2024-11-19 10:57:55.067185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.488 [2024-11-19 10:57:55.067191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.488 [2024-11-19 10:57:55.078624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.488 [2024-11-19 10:57:55.078975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.488 [2024-11-19 10:57:55.078991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.488 [2024-11-19 10:57:55.078998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.488 [2024-11-19 10:57:55.079165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.488 [2024-11-19 10:57:55.079342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.488 [2024-11-19 10:57:55.079351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.488 [2024-11-19 10:57:55.079357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.488 [2024-11-19 10:57:55.079364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.488 [2024-11-19 10:57:55.091362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.488 [2024-11-19 10:57:55.091800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.488 [2024-11-19 10:57:55.091816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.488 [2024-11-19 10:57:55.091823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.488 [2024-11-19 10:57:55.091990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.488 [2024-11-19 10:57:55.092156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.488 [2024-11-19 10:57:55.092164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.488 [2024-11-19 10:57:55.092170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.488 [2024-11-19 10:57:55.092175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 4084526 Killed "${NVMF_APP[@]}" "$@"
00:30:05.488 10:57:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:30:05.488 10:57:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:30:05.488 10:57:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:05.488 10:57:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:05.488 10:57:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:05.488 [2024-11-19 10:57:55.104346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.488 [2024-11-19 10:57:55.104774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.488 [2024-11-19 10:57:55.104790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.488 [2024-11-19 10:57:55.104797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.488 [2024-11-19 10:57:55.104968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.488 [2024-11-19 10:57:55.105139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.488 [2024-11-19 10:57:55.105148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.488 [2024-11-19 10:57:55.105154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.488 [2024-11-19 10:57:55.105160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.488 10:57:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=4085900
00:30:05.488 10:57:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 4085900
00:30:05.488 10:57:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:30:05.488 10:57:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 4085900 ']'
00:30:05.489 10:57:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:05.489 10:57:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:05.489 10:57:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:05.489 10:57:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:05.489 10:57:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:05.489 [2024-11-19 10:57:55.117304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.489 [2024-11-19 10:57:55.117730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.489 [2024-11-19 10:57:55.117745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.489 [2024-11-19 10:57:55.117752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.489 [2024-11-19 10:57:55.117923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.489 [2024-11-19 10:57:55.118095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.489 [2024-11-19 10:57:55.118103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.489 [2024-11-19 10:57:55.118110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.489 [2024-11-19 10:57:55.118116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.489 [2024-11-19 10:57:55.130262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.489 [2024-11-19 10:57:55.130687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.489 [2024-11-19 10:57:55.130703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.489 [2024-11-19 10:57:55.130710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.489 [2024-11-19 10:57:55.130882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.489 [2024-11-19 10:57:55.131054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.489 [2024-11-19 10:57:55.131062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.489 [2024-11-19 10:57:55.131068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.489 [2024-11-19 10:57:55.131074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.489 [2024-11-19 10:57:55.143263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.489 [2024-11-19 10:57:55.143694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.489 [2024-11-19 10:57:55.143710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.489 [2024-11-19 10:57:55.143718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.489 [2024-11-19 10:57:55.143889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.489 [2024-11-19 10:57:55.144060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.489 [2024-11-19 10:57:55.144073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.489 [2024-11-19 10:57:55.144079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.489 [2024-11-19 10:57:55.144085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.489 [2024-11-19 10:57:55.156251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.489 [2024-11-19 10:57:55.156683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.489 [2024-11-19 10:57:55.156700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.489 [2024-11-19 10:57:55.156707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.489 [2024-11-19 10:57:55.156880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.489 [2024-11-19 10:57:55.157052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.489 [2024-11-19 10:57:55.157060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.489 [2024-11-19 10:57:55.157067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.489 [2024-11-19 10:57:55.157073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.489 [2024-11-19 10:57:55.157621] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:30:05.489 [2024-11-19 10:57:55.157660] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:05.489 [2024-11-19 10:57:55.169329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.489 [2024-11-19 10:57:55.169765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.489 [2024-11-19 10:57:55.169783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.489 [2024-11-19 10:57:55.169791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.489 [2024-11-19 10:57:55.169963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.489 [2024-11-19 10:57:55.170135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.489 [2024-11-19 10:57:55.170144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.489 [2024-11-19 10:57:55.170151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.489 [2024-11-19 10:57:55.170157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.489 [2024-11-19 10:57:55.182240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.489 [2024-11-19 10:57:55.182683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.489 [2024-11-19 10:57:55.182700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.489 [2024-11-19 10:57:55.182707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.489 [2024-11-19 10:57:55.182880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.489 [2024-11-19 10:57:55.183051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.489 [2024-11-19 10:57:55.183062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.489 [2024-11-19 10:57:55.183069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.489 [2024-11-19 10:57:55.183075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.489 [2024-11-19 10:57:55.195198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.489 [2024-11-19 10:57:55.195664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.489 [2024-11-19 10:57:55.195681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.489 [2024-11-19 10:57:55.195689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.489 [2024-11-19 10:57:55.195862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.489 [2024-11-19 10:57:55.196034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.489 [2024-11-19 10:57:55.196043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.489 [2024-11-19 10:57:55.196050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.489 [2024-11-19 10:57:55.196057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.489 [2024-11-19 10:57:55.208241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.489 [2024-11-19 10:57:55.208607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.489 [2024-11-19 10:57:55.208625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.489 [2024-11-19 10:57:55.208633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.489 [2024-11-19 10:57:55.208806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.489 [2024-11-19 10:57:55.208978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.489 [2024-11-19 10:57:55.208987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.489 [2024-11-19 10:57:55.208994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.489 [2024-11-19 10:57:55.209001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.489 [2024-11-19 10:57:55.221322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.489 [2024-11-19 10:57:55.221672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.489 [2024-11-19 10:57:55.221690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.489 [2024-11-19 10:57:55.221697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.489 [2024-11-19 10:57:55.221869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.489 [2024-11-19 10:57:55.222041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.489 [2024-11-19 10:57:55.222049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.489 [2024-11-19 10:57:55.222057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.489 [2024-11-19 10:57:55.222067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.488 [2024-11-19 10:57:55.234348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.488 [2024-11-19 10:57:55.234719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.490 [2024-11-19 10:57:55.234735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.490 [2024-11-19 10:57:55.234743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.490 [2024-11-19 10:57:55.234914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.490 [2024-11-19 10:57:55.235087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.490 [2024-11-19 10:57:55.235095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.490 [2024-11-19 10:57:55.235102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.490 [2024-11-19 10:57:55.235109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.490 [2024-11-19 10:57:55.239289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:05.490 [2024-11-19 10:57:55.247349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.490 [2024-11-19 10:57:55.247774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.490 [2024-11-19 10:57:55.247792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.490 [2024-11-19 10:57:55.247801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.490 [2024-11-19 10:57:55.247974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.490 [2024-11-19 10:57:55.248149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.490 [2024-11-19 10:57:55.248158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.490 [2024-11-19 10:57:55.248165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.490 [2024-11-19 10:57:55.248172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.490 [2024-11-19 10:57:55.260275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.490 [2024-11-19 10:57:55.260718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.490 [2024-11-19 10:57:55.260734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.490 [2024-11-19 10:57:55.260741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.490 [2024-11-19 10:57:55.260913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.490 [2024-11-19 10:57:55.261084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.490 [2024-11-19 10:57:55.261092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.490 [2024-11-19 10:57:55.261099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.490 [2024-11-19 10:57:55.261106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.490 [2024-11-19 10:57:55.273293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.490 [2024-11-19 10:57:55.273725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.490 [2024-11-19 10:57:55.273740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.490 [2024-11-19 10:57:55.273748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.490 [2024-11-19 10:57:55.273919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.490 [2024-11-19 10:57:55.274091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.490 [2024-11-19 10:57:55.274099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.490 [2024-11-19 10:57:55.274106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.490 [2024-11-19 10:57:55.274112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.750 [2024-11-19 10:57:55.282128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:05.750 [2024-11-19 10:57:55.282151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:05.750 [2024-11-19 10:57:55.282158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:05.750 [2024-11-19 10:57:55.282164] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:05.750 [2024-11-19 10:57:55.282169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:05.750 [2024-11-19 10:57:55.283424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:05.750 [2024-11-19 10:57:55.283531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:05.750 [2024-11-19 10:57:55.283532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:05.750 [2024-11-19 10:57:55.286354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.750 [2024-11-19 10:57:55.286796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.750 [2024-11-19 10:57:55.286814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.750 [2024-11-19 10:57:55.286822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.750 [2024-11-19 10:57:55.286994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.750 [2024-11-19 10:57:55.287168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.750 [2024-11-19 10:57:55.287176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.750 [2024-11-19 10:57:55.287182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.750 [2024-11-19 10:57:55.287189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.750 [2024-11-19 10:57:55.299335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.750 [2024-11-19 10:57:55.299794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.750 [2024-11-19 10:57:55.299813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.750 [2024-11-19 10:57:55.299822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.750 [2024-11-19 10:57:55.299995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.750 [2024-11-19 10:57:55.300167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.750 [2024-11-19 10:57:55.300179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.750 [2024-11-19 10:57:55.300186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.750 [2024-11-19 10:57:55.300193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.750 [2024-11-19 10:57:55.312338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.750 [2024-11-19 10:57:55.312794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.750 [2024-11-19 10:57:55.312814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.750 [2024-11-19 10:57:55.312822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.750 [2024-11-19 10:57:55.312995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.750 [2024-11-19 10:57:55.313168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.750 [2024-11-19 10:57:55.313176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.750 [2024-11-19 10:57:55.313183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.750 [2024-11-19 10:57:55.313190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.750 [2024-11-19 10:57:55.325330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.750 [2024-11-19 10:57:55.325757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.750 [2024-11-19 10:57:55.325776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.750 [2024-11-19 10:57:55.325784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.750 [2024-11-19 10:57:55.325957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.750 [2024-11-19 10:57:55.326130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.750 [2024-11-19 10:57:55.326138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.750 [2024-11-19 10:57:55.326145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.750 [2024-11-19 10:57:55.326152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.750 [2024-11-19 10:57:55.338295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.750 [2024-11-19 10:57:55.338756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.750 [2024-11-19 10:57:55.338774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.750 [2024-11-19 10:57:55.338783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.751 [2024-11-19 10:57:55.338956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.751 [2024-11-19 10:57:55.339129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.751 [2024-11-19 10:57:55.339137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.751 [2024-11-19 10:57:55.339145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.751 [2024-11-19 10:57:55.339157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.751 [2024-11-19 10:57:55.351293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.751 [2024-11-19 10:57:55.351734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.751 [2024-11-19 10:57:55.351751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420
00:30:05.751 [2024-11-19 10:57:55.351759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set
00:30:05.751 [2024-11-19 10:57:55.351931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor
00:30:05.751 [2024-11-19 10:57:55.352104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.751 [2024-11-19 10:57:55.352112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.751 [2024-11-19 10:57:55.352119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.751 [2024-11-19 10:57:55.352125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.751 [2024-11-19 10:57:55.364393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.751 [2024-11-19 10:57:55.364801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.751 [2024-11-19 10:57:55.364817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.751 [2024-11-19 10:57:55.364825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.751 [2024-11-19 10:57:55.364995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.751 [2024-11-19 10:57:55.365167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.751 [2024-11-19 10:57:55.365175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.751 [2024-11-19 10:57:55.365182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.751 [2024-11-19 10:57:55.365188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.751 [2024-11-19 10:57:55.377477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.751 [2024-11-19 10:57:55.377884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.751 [2024-11-19 10:57:55.377901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.751 [2024-11-19 10:57:55.377908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.751 [2024-11-19 10:57:55.378080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.751 [2024-11-19 10:57:55.378256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.751 [2024-11-19 10:57:55.378265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.751 [2024-11-19 10:57:55.378272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.751 [2024-11-19 10:57:55.378278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.751 [2024-11-19 10:57:55.390575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.751 [2024-11-19 10:57:55.391007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.751 [2024-11-19 10:57:55.391023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.751 [2024-11-19 10:57:55.391031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.751 [2024-11-19 10:57:55.391207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.751 [2024-11-19 10:57:55.391380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.751 [2024-11-19 10:57:55.391388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.751 [2024-11-19 10:57:55.391395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.751 [2024-11-19 10:57:55.391401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.751 [2024-11-19 10:57:55.403533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.751 [2024-11-19 10:57:55.403899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.751 [2024-11-19 10:57:55.403915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.751 [2024-11-19 10:57:55.403923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.751 [2024-11-19 10:57:55.404094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.751 [2024-11-19 10:57:55.404272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.751 [2024-11-19 10:57:55.404280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.751 [2024-11-19 10:57:55.404287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.751 [2024-11-19 10:57:55.404293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.751 [2024-11-19 10:57:55.416565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.751 [2024-11-19 10:57:55.416995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.751 [2024-11-19 10:57:55.417011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.751 [2024-11-19 10:57:55.417019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.751 [2024-11-19 10:57:55.417189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.751 [2024-11-19 10:57:55.417365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.751 [2024-11-19 10:57:55.417374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.751 [2024-11-19 10:57:55.417381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.751 [2024-11-19 10:57:55.417387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.751 5074.83 IOPS, 19.82 MiB/s [2024-11-19T09:57:55.543Z] [2024-11-19 10:57:55.429664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.751 [2024-11-19 10:57:55.430094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.751 [2024-11-19 10:57:55.430111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.751 [2024-11-19 10:57:55.430118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.751 [2024-11-19 10:57:55.430298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.751 [2024-11-19 10:57:55.430470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.751 [2024-11-19 10:57:55.430478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.751 [2024-11-19 10:57:55.430484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.751 [2024-11-19 10:57:55.430491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.751 [2024-11-19 10:57:55.442630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.751 [2024-11-19 10:57:55.443044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.751 [2024-11-19 10:57:55.443060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.751 [2024-11-19 10:57:55.443068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.751 [2024-11-19 10:57:55.443244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.751 [2024-11-19 10:57:55.443417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.751 [2024-11-19 10:57:55.443425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.751 [2024-11-19 10:57:55.443432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.751 [2024-11-19 10:57:55.443438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.751 [2024-11-19 10:57:55.455706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.751 [2024-11-19 10:57:55.456136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.751 [2024-11-19 10:57:55.456153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.751 [2024-11-19 10:57:55.456160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.751 [2024-11-19 10:57:55.456336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.751 [2024-11-19 10:57:55.456509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.751 [2024-11-19 10:57:55.456518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.751 [2024-11-19 10:57:55.456524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.751 [2024-11-19 10:57:55.456530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.751 [2024-11-19 10:57:55.468748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.751 [2024-11-19 10:57:55.469181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.751 [2024-11-19 10:57:55.469197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.752 [2024-11-19 10:57:55.469208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.752 [2024-11-19 10:57:55.469380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.752 [2024-11-19 10:57:55.469551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.752 [2024-11-19 10:57:55.469562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.752 [2024-11-19 10:57:55.469569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.752 [2024-11-19 10:57:55.469575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.752 [2024-11-19 10:57:55.481836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.752 [2024-11-19 10:57:55.482268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.752 [2024-11-19 10:57:55.482285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.752 [2024-11-19 10:57:55.482292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.752 [2024-11-19 10:57:55.482463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.752 [2024-11-19 10:57:55.482636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.752 [2024-11-19 10:57:55.482644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.752 [2024-11-19 10:57:55.482650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.752 [2024-11-19 10:57:55.482656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.752 [2024-11-19 10:57:55.494932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.752 [2024-11-19 10:57:55.495339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.752 [2024-11-19 10:57:55.495357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.752 [2024-11-19 10:57:55.495364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.752 [2024-11-19 10:57:55.495535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.752 [2024-11-19 10:57:55.495707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.752 [2024-11-19 10:57:55.495715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.752 [2024-11-19 10:57:55.495722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.752 [2024-11-19 10:57:55.495728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.752 [2024-11-19 10:57:55.508013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.752 [2024-11-19 10:57:55.508444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.752 [2024-11-19 10:57:55.508460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.752 [2024-11-19 10:57:55.508467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.752 [2024-11-19 10:57:55.508638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.752 [2024-11-19 10:57:55.508810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.752 [2024-11-19 10:57:55.508819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.752 [2024-11-19 10:57:55.508825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.752 [2024-11-19 10:57:55.508836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.752 [2024-11-19 10:57:55.520972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.752 [2024-11-19 10:57:55.521298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.752 [2024-11-19 10:57:55.521316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.752 [2024-11-19 10:57:55.521323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.752 [2024-11-19 10:57:55.521494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.752 [2024-11-19 10:57:55.521666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.752 [2024-11-19 10:57:55.521674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.752 [2024-11-19 10:57:55.521681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.752 [2024-11-19 10:57:55.521687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.752 [2024-11-19 10:57:55.533970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.752 [2024-11-19 10:57:55.534401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.752 [2024-11-19 10:57:55.534418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:05.752 [2024-11-19 10:57:55.534425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:05.752 [2024-11-19 10:57:55.534596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:05.752 [2024-11-19 10:57:55.534769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.752 [2024-11-19 10:57:55.534778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.752 [2024-11-19 10:57:55.534784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.752 [2024-11-19 10:57:55.534790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.012 [2024-11-19 10:57:55.546922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.012 [2024-11-19 10:57:55.547351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.012 [2024-11-19 10:57:55.547368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.012 [2024-11-19 10:57:55.547375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.012 [2024-11-19 10:57:55.547546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.012 [2024-11-19 10:57:55.547718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.012 [2024-11-19 10:57:55.547726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.012 [2024-11-19 10:57:55.547732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.012 [2024-11-19 10:57:55.547738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.012 [2024-11-19 10:57:55.560009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.012 [2024-11-19 10:57:55.560426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.012 [2024-11-19 10:57:55.560443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.012 [2024-11-19 10:57:55.560450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.012 [2024-11-19 10:57:55.560621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.012 [2024-11-19 10:57:55.560797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.012 [2024-11-19 10:57:55.560805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.012 [2024-11-19 10:57:55.560811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.012 [2024-11-19 10:57:55.560818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.012 [2024-11-19 10:57:55.573090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.012 [2024-11-19 10:57:55.573525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.012 [2024-11-19 10:57:55.573542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.012 [2024-11-19 10:57:55.573549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.012 [2024-11-19 10:57:55.573720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.012 [2024-11-19 10:57:55.573893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.012 [2024-11-19 10:57:55.573901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.012 [2024-11-19 10:57:55.573907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.012 [2024-11-19 10:57:55.573914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.012 [2024-11-19 10:57:55.586186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.012 [2024-11-19 10:57:55.586620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.012 [2024-11-19 10:57:55.586636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.012 [2024-11-19 10:57:55.586644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.012 [2024-11-19 10:57:55.586815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.012 [2024-11-19 10:57:55.586988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.012 [2024-11-19 10:57:55.586996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.012 [2024-11-19 10:57:55.587002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.012 [2024-11-19 10:57:55.587008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.012 [2024-11-19 10:57:55.599134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.012 [2024-11-19 10:57:55.599543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.012 [2024-11-19 10:57:55.599560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.012 [2024-11-19 10:57:55.599567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.012 [2024-11-19 10:57:55.599742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.012 [2024-11-19 10:57:55.599914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.012 [2024-11-19 10:57:55.599922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.012 [2024-11-19 10:57:55.599929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.012 [2024-11-19 10:57:55.599935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.012 [2024-11-19 10:57:55.612230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.012 [2024-11-19 10:57:55.612632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.012 [2024-11-19 10:57:55.612648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.012 [2024-11-19 10:57:55.612655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.012 [2024-11-19 10:57:55.612827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.012 [2024-11-19 10:57:55.612999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.012 [2024-11-19 10:57:55.613007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.012 [2024-11-19 10:57:55.613013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.012 [2024-11-19 10:57:55.613020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.012 [2024-11-19 10:57:55.625293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.012 [2024-11-19 10:57:55.625722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.012 [2024-11-19 10:57:55.625738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.012 [2024-11-19 10:57:55.625745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.012 [2024-11-19 10:57:55.625916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.012 [2024-11-19 10:57:55.626088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.012 [2024-11-19 10:57:55.626096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.012 [2024-11-19 10:57:55.626103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.012 [2024-11-19 10:57:55.626109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.012 [2024-11-19 10:57:55.638238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.012 [2024-11-19 10:57:55.638668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.012 [2024-11-19 10:57:55.638684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.012 [2024-11-19 10:57:55.638691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.012 [2024-11-19 10:57:55.638863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.012 [2024-11-19 10:57:55.639037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.012 [2024-11-19 10:57:55.639048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.012 [2024-11-19 10:57:55.639054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.012 [2024-11-19 10:57:55.639060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.012 [2024-11-19 10:57:55.651329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.012 [2024-11-19 10:57:55.651767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.012 [2024-11-19 10:57:55.651783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.012 [2024-11-19 10:57:55.651789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.012 [2024-11-19 10:57:55.651960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.012 [2024-11-19 10:57:55.652132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.012 [2024-11-19 10:57:55.652140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.012 [2024-11-19 10:57:55.652146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.012 [2024-11-19 10:57:55.652152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.013 [2024-11-19 10:57:55.664274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.013 [2024-11-19 10:57:55.664697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.013 [2024-11-19 10:57:55.664713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.013 [2024-11-19 10:57:55.664720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.013 [2024-11-19 10:57:55.664891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.013 [2024-11-19 10:57:55.665062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.013 [2024-11-19 10:57:55.665071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.013 [2024-11-19 10:57:55.665077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.013 [2024-11-19 10:57:55.665083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.013 [2024-11-19 10:57:55.677344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.013 [2024-11-19 10:57:55.677773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.013 [2024-11-19 10:57:55.677790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.013 [2024-11-19 10:57:55.677797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.013 [2024-11-19 10:57:55.677968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.013 [2024-11-19 10:57:55.678140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.013 [2024-11-19 10:57:55.678148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.013 [2024-11-19 10:57:55.678154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.013 [2024-11-19 10:57:55.678164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.013 [2024-11-19 10:57:55.690291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.013 [2024-11-19 10:57:55.690723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.013 [2024-11-19 10:57:55.690738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.013 [2024-11-19 10:57:55.690746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.013 [2024-11-19 10:57:55.690917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.013 [2024-11-19 10:57:55.691090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.013 [2024-11-19 10:57:55.691098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.013 [2024-11-19 10:57:55.691104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.013 [2024-11-19 10:57:55.691111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.013 [2024-11-19 10:57:55.703244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.013 [2024-11-19 10:57:55.703664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.013 [2024-11-19 10:57:55.703680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.013 [2024-11-19 10:57:55.703687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.013 [2024-11-19 10:57:55.703858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.013 [2024-11-19 10:57:55.704031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.013 [2024-11-19 10:57:55.704039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.013 [2024-11-19 10:57:55.704046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.013 [2024-11-19 10:57:55.704053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.013 [2024-11-19 10:57:55.716346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.013 [2024-11-19 10:57:55.716773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.013 [2024-11-19 10:57:55.716790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.013 [2024-11-19 10:57:55.716797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.013 [2024-11-19 10:57:55.716969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.013 [2024-11-19 10:57:55.717140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.013 [2024-11-19 10:57:55.717148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.013 [2024-11-19 10:57:55.717155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.013 [2024-11-19 10:57:55.717161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.013 [2024-11-19 10:57:55.729386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.013 [2024-11-19 10:57:55.729724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.013 [2024-11-19 10:57:55.729743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.013 [2024-11-19 10:57:55.729751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.013 [2024-11-19 10:57:55.729922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.013 [2024-11-19 10:57:55.730095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.013 [2024-11-19 10:57:55.730104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.013 [2024-11-19 10:57:55.730111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.013 [2024-11-19 10:57:55.730117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.013 [2024-11-19 10:57:55.742433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.013 [2024-11-19 10:57:55.742840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.013 [2024-11-19 10:57:55.742857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.013 [2024-11-19 10:57:55.742864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.013 [2024-11-19 10:57:55.743036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.013 [2024-11-19 10:57:55.743213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.013 [2024-11-19 10:57:55.743222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.013 [2024-11-19 10:57:55.743230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.013 [2024-11-19 10:57:55.743239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.013 [2024-11-19 10:57:55.755537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.013 [2024-11-19 10:57:55.755870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.013 [2024-11-19 10:57:55.755887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.013 [2024-11-19 10:57:55.755894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.013 [2024-11-19 10:57:55.756066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.013 [2024-11-19 10:57:55.756244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.013 [2024-11-19 10:57:55.756253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.013 [2024-11-19 10:57:55.756259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.013 [2024-11-19 10:57:55.756265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.013 [2024-11-19 10:57:55.768556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.013 [2024-11-19 10:57:55.768890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.013 [2024-11-19 10:57:55.768907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.013 [2024-11-19 10:57:55.768914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.013 [2024-11-19 10:57:55.769091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.013 [2024-11-19 10:57:55.769270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.013 [2024-11-19 10:57:55.769279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.013 [2024-11-19 10:57:55.769285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.013 [2024-11-19 10:57:55.769291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.013 [2024-11-19 10:57:55.781607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.013 [2024-11-19 10:57:55.781970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.013 [2024-11-19 10:57:55.781986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.013 [2024-11-19 10:57:55.781993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.013 [2024-11-19 10:57:55.782164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.013 [2024-11-19 10:57:55.782343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.013 [2024-11-19 10:57:55.782353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.013 [2024-11-19 10:57:55.782359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.014 [2024-11-19 10:57:55.782366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.014 [2024-11-19 10:57:55.794675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.014 [2024-11-19 10:57:55.794966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.014 [2024-11-19 10:57:55.794982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.014 [2024-11-19 10:57:55.794989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.014 [2024-11-19 10:57:55.795161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.014 [2024-11-19 10:57:55.795338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.014 [2024-11-19 10:57:55.795347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.014 [2024-11-19 10:57:55.795353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.014 [2024-11-19 10:57:55.795360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.273 [2024-11-19 10:57:55.807658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.273 [2024-11-19 10:57:55.808012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.273 [2024-11-19 10:57:55.808028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.273 [2024-11-19 10:57:55.808035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.273 [2024-11-19 10:57:55.808212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.273 [2024-11-19 10:57:55.808385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.273 [2024-11-19 10:57:55.808396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.273 [2024-11-19 10:57:55.808403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.273 [2024-11-19 10:57:55.808409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.273 [2024-11-19 10:57:55.820699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.273 [2024-11-19 10:57:55.820957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.273 [2024-11-19 10:57:55.820974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.273 [2024-11-19 10:57:55.820981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.273 [2024-11-19 10:57:55.821153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.273 [2024-11-19 10:57:55.821331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.273 [2024-11-19 10:57:55.821340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.273 [2024-11-19 10:57:55.821346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.273 [2024-11-19 10:57:55.821352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.273 [2024-11-19 10:57:55.833667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.273 [2024-11-19 10:57:55.834003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.273 [2024-11-19 10:57:55.834020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.273 [2024-11-19 10:57:55.834027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.273 [2024-11-19 10:57:55.834197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.273 [2024-11-19 10:57:55.834377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.273 [2024-11-19 10:57:55.834385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.273 [2024-11-19 10:57:55.834391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.273 [2024-11-19 10:57:55.834397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.273 [2024-11-19 10:57:55.846736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.273 [2024-11-19 10:57:55.847103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.273 [2024-11-19 10:57:55.847120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.273 [2024-11-19 10:57:55.847127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.273 [2024-11-19 10:57:55.847303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.273 [2024-11-19 10:57:55.847475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.273 [2024-11-19 10:57:55.847484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.273 [2024-11-19 10:57:55.847490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.273 [2024-11-19 10:57:55.847497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.273 [2024-11-19 10:57:55.859804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.273 [2024-11-19 10:57:55.860213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.273 [2024-11-19 10:57:55.860231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.273 [2024-11-19 10:57:55.860238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.273 [2024-11-19 10:57:55.860410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.273 [2024-11-19 10:57:55.860582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.273 [2024-11-19 10:57:55.860591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.273 [2024-11-19 10:57:55.860597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.273 [2024-11-19 10:57:55.860604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.274 [2024-11-19 10:57:55.872914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.274 [2024-11-19 10:57:55.873365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.274 [2024-11-19 10:57:55.873382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.274 [2024-11-19 10:57:55.873389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.274 [2024-11-19 10:57:55.873560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.274 [2024-11-19 10:57:55.873732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.274 [2024-11-19 10:57:55.873739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.274 [2024-11-19 10:57:55.873746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.274 [2024-11-19 10:57:55.873752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.274 [2024-11-19 10:57:55.885885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.274 [2024-11-19 10:57:55.886294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.274 [2024-11-19 10:57:55.886311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.274 [2024-11-19 10:57:55.886319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.274 [2024-11-19 10:57:55.886490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.274 [2024-11-19 10:57:55.886663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.274 [2024-11-19 10:57:55.886670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.274 [2024-11-19 10:57:55.886677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.274 [2024-11-19 10:57:55.886683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.274 [2024-11-19 10:57:55.898986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.274 [2024-11-19 10:57:55.899330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.274 [2024-11-19 10:57:55.899351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.274 [2024-11-19 10:57:55.899358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.274 [2024-11-19 10:57:55.899530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.274 [2024-11-19 10:57:55.899702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.274 [2024-11-19 10:57:55.899710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.274 [2024-11-19 10:57:55.899717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.274 [2024-11-19 10:57:55.899724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.274 [2024-11-19 10:57:55.911997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.274 [2024-11-19 10:57:55.912328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.274 [2024-11-19 10:57:55.912346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.274 [2024-11-19 10:57:55.912354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.274 [2024-11-19 10:57:55.912526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.274 [2024-11-19 10:57:55.912699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.274 [2024-11-19 10:57:55.912707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.274 [2024-11-19 10:57:55.912713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.274 [2024-11-19 10:57:55.912721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.274 [2024-11-19 10:57:55.925026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.274 [2024-11-19 10:57:55.925334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.274 [2024-11-19 10:57:55.925351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.274 [2024-11-19 10:57:55.925358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.274 [2024-11-19 10:57:55.925529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.274 [2024-11-19 10:57:55.925702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.274 [2024-11-19 10:57:55.925710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.274 [2024-11-19 10:57:55.925716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.274 [2024-11-19 10:57:55.925722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.274 [2024-11-19 10:57:55.938043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.274 [2024-11-19 10:57:55.938408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.274 [2024-11-19 10:57:55.938424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.274 [2024-11-19 10:57:55.938432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.274 [2024-11-19 10:57:55.938608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.274 [2024-11-19 10:57:55.938781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.274 [2024-11-19 10:57:55.938790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.274 [2024-11-19 10:57:55.938798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.274 [2024-11-19 10:57:55.938804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.274 [2024-11-19 10:57:55.951102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.274 [2024-11-19 10:57:55.951442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.274 [2024-11-19 10:57:55.951460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.274 [2024-11-19 10:57:55.951467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.274 [2024-11-19 10:57:55.951638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.274 [2024-11-19 10:57:55.951811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.274 [2024-11-19 10:57:55.951819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.274 [2024-11-19 10:57:55.951826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.274 [2024-11-19 10:57:55.951832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.274 [2024-11-19 10:57:55.964139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.274 [2024-11-19 10:57:55.964504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.274 [2024-11-19 10:57:55.964521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.274 [2024-11-19 10:57:55.964529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.274 [2024-11-19 10:57:55.964702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.274 [2024-11-19 10:57:55.964874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.274 [2024-11-19 10:57:55.964883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.274 [2024-11-19 10:57:55.964890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.274 [2024-11-19 10:57:55.964897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.274 [2024-11-19 10:57:55.977193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.274 [2024-11-19 10:57:55.977562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.274 [2024-11-19 10:57:55.977578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.274 [2024-11-19 10:57:55.977585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.274 [2024-11-19 10:57:55.977757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.274 [2024-11-19 10:57:55.977930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.274 [2024-11-19 10:57:55.977939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.274 [2024-11-19 10:57:55.977949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.274 [2024-11-19 10:57:55.977956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.274 [2024-11-19 10:57:55.990262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.274 [2024-11-19 10:57:55.990616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.274 [2024-11-19 10:57:55.990648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.274 [2024-11-19 10:57:55.990655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.274 [2024-11-19 10:57:55.990838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.274 [2024-11-19 10:57:55.991021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.274 [2024-11-19 10:57:55.991030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.274 [2024-11-19 10:57:55.991038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.275 [2024-11-19 10:57:55.991044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.275 [2024-11-19 10:57:56.003283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.275 [2024-11-19 10:57:56.003635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.275 [2024-11-19 10:57:56.003651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.275 [2024-11-19 10:57:56.003659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.275 [2024-11-19 10:57:56.003830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.275 [2024-11-19 10:57:56.004005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.275 [2024-11-19 10:57:56.004013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.275 [2024-11-19 10:57:56.004019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.275 [2024-11-19 10:57:56.004026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.275 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:06.275 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:06.275 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:06.275 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:06.275 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:06.275 [2024-11-19 10:57:56.016322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.275 [2024-11-19 10:57:56.016611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.275 [2024-11-19 10:57:56.016627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.275 [2024-11-19 10:57:56.016634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.275 [2024-11-19 10:57:56.016806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.275 [2024-11-19 10:57:56.016979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.275 [2024-11-19 10:57:56.016991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.275 [2024-11-19 10:57:56.016998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.275 [2024-11-19 10:57:56.017004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.275 [2024-11-19 10:57:56.029309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.275 [2024-11-19 10:57:56.029644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.275 [2024-11-19 10:57:56.029661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.275 [2024-11-19 10:57:56.029669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.275 [2024-11-19 10:57:56.029841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.275 [2024-11-19 10:57:56.030014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.275 [2024-11-19 10:57:56.030022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.275 [2024-11-19 10:57:56.030028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.275 [2024-11-19 10:57:56.030034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.275 [2024-11-19 10:57:56.042355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.275 [2024-11-19 10:57:56.042644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.275 [2024-11-19 10:57:56.042661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.275 [2024-11-19 10:57:56.042669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.275 [2024-11-19 10:57:56.042840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.275 [2024-11-19 10:57:56.043012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.275 [2024-11-19 10:57:56.043021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.275 [2024-11-19 10:57:56.043028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.275 [2024-11-19 10:57:56.043034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.275 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.275 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:06.275 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.275 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:06.275 [2024-11-19 10:57:56.048356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.275 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.275 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:06.275 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.275 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:06.275 [2024-11-19 10:57:56.055349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.275 [2024-11-19 10:57:56.055693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.275 [2024-11-19 10:57:56.055709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.275 [2024-11-19 10:57:56.055716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.275 [2024-11-19 10:57:56.055888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.275 [2024-11-19 10:57:56.056061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.275 [2024-11-19 10:57:56.056070] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.275 [2024-11-19 10:57:56.056077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.275 [2024-11-19 10:57:56.056083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:06.534 [2024-11-19 10:57:56.068403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.534 [2024-11-19 10:57:56.068769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.534 [2024-11-19 10:57:56.068787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.534 [2024-11-19 10:57:56.068794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.534 [2024-11-19 10:57:56.068967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.534 [2024-11-19 10:57:56.069140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.534 [2024-11-19 10:57:56.069148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.534 [2024-11-19 10:57:56.069154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.534 [2024-11-19 10:57:56.069161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.534 [2024-11-19 10:57:56.081463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.534 [2024-11-19 10:57:56.081750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.534 [2024-11-19 10:57:56.081767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.534 [2024-11-19 10:57:56.081775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.534 [2024-11-19 10:57:56.081947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.534 [2024-11-19 10:57:56.082121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.534 [2024-11-19 10:57:56.082129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.534 [2024-11-19 10:57:56.082135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.534 [2024-11-19 10:57:56.082142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.534 Malloc0 00:30:06.534 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.534 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.534 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.534 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:06.534 [2024-11-19 10:57:56.094470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.534 [2024-11-19 10:57:56.094750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.534 [2024-11-19 10:57:56.094767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.534 [2024-11-19 10:57:56.094774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.534 [2024-11-19 10:57:56.094946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.534 [2024-11-19 10:57:56.095118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.534 [2024-11-19 10:57:56.095126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.534 [2024-11-19 10:57:56.095132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.534 [2024-11-19 10:57:56.095139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.534 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.534 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:06.534 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.534 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:06.534 [2024-11-19 10:57:56.107428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.534 [2024-11-19 10:57:56.107782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.534 [2024-11-19 10:57:56.107798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbdd500 with addr=10.0.0.2, port=4420 00:30:06.534 [2024-11-19 10:57:56.107806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdd500 is same with the state(6) to be set 00:30:06.534 [2024-11-19 10:57:56.107978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdd500 (9): Bad file descriptor 00:30:06.534 [2024-11-19 10:57:56.108149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.534 [2024-11-19 10:57:56.108158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.534 [2024-11-19 10:57:56.108164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.534 [2024-11-19 10:57:56.108171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.534 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.534 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.534 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.535 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:06.535 [2024-11-19 10:57:56.112800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.535 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.535 10:57:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 4084977 00:30:06.535 [2024-11-19 10:57:56.120468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.535 [2024-11-19 10:57:56.189926] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:30:07.725 4726.29 IOPS, 18.46 MiB/s [2024-11-19T09:57:58.450Z] 5551.38 IOPS, 21.69 MiB/s [2024-11-19T09:57:59.825Z] 6224.89 IOPS, 24.32 MiB/s [2024-11-19T09:58:00.772Z] 6739.70 IOPS, 26.33 MiB/s [2024-11-19T09:58:01.704Z] 7170.45 IOPS, 28.01 MiB/s [2024-11-19T09:58:02.635Z] 7547.00 IOPS, 29.48 MiB/s [2024-11-19T09:58:03.567Z] 7851.15 IOPS, 30.67 MiB/s [2024-11-19T09:58:04.500Z] 8120.21 IOPS, 31.72 MiB/s [2024-11-19T09:58:04.500Z] 8343.13 IOPS, 32.59 MiB/s 00:30:14.708 Latency(us) 00:30:14.708 [2024-11-19T09:58:04.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.708 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:14.708 Verification LBA range: start 0x0 length 0x4000 00:30:14.708 Nvme1n1 : 15.01 8348.07 32.61 13298.60 0.00 5893.84 477.87 12670.29 00:30:14.708 [2024-11-19T09:58:04.500Z] =================================================================================================================== 00:30:14.708 [2024-11-19T09:58:04.500Z] Total : 8348.07 32.61 13298.60 0.00 5893.84 477.87 12670.29 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:14.967 rmmod nvme_tcp 00:30:14.967 rmmod nvme_fabrics 00:30:14.967 rmmod nvme_keyring 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 4085900 ']' 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 4085900 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 4085900 ']' 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 4085900 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4085900 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4085900' 00:30:14.967 killing process with pid 4085900 00:30:14.967 
10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 4085900 00:30:14.967 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 4085900 00:30:15.227 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:15.227 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:15.227 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:15.227 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:15.227 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:15.227 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:15.227 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:30:15.227 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:15.227 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:15.227 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.227 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.227 10:58:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.765 10:58:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:17.766 00:30:17.766 real 0m25.961s 00:30:17.766 user 1m0.496s 00:30:17.766 sys 0m6.678s 00:30:17.766 10:58:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:17.766 10:58:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.766 ************************************ 00:30:17.766 END TEST nvmf_bdevperf 00:30:17.766 
************************************ 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.766 ************************************ 00:30:17.766 START TEST nvmf_target_disconnect 00:30:17.766 ************************************ 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:17.766 * Looking for test storage... 00:30:17.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:17.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.766 --rc genhtml_branch_coverage=1 00:30:17.766 --rc genhtml_function_coverage=1 00:30:17.766 --rc genhtml_legend=1 00:30:17.766 --rc geninfo_all_blocks=1 00:30:17.766 --rc geninfo_unexecuted_blocks=1 
00:30:17.766 00:30:17.766 ' 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:17.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.766 --rc genhtml_branch_coverage=1 00:30:17.766 --rc genhtml_function_coverage=1 00:30:17.766 --rc genhtml_legend=1 00:30:17.766 --rc geninfo_all_blocks=1 00:30:17.766 --rc geninfo_unexecuted_blocks=1 00:30:17.766 00:30:17.766 ' 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:17.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.766 --rc genhtml_branch_coverage=1 00:30:17.766 --rc genhtml_function_coverage=1 00:30:17.766 --rc genhtml_legend=1 00:30:17.766 --rc geninfo_all_blocks=1 00:30:17.766 --rc geninfo_unexecuted_blocks=1 00:30:17.766 00:30:17.766 ' 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:17.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.766 --rc genhtml_branch_coverage=1 00:30:17.766 --rc genhtml_function_coverage=1 00:30:17.766 --rc genhtml_legend=1 00:30:17.766 --rc geninfo_all_blocks=1 00:30:17.766 --rc geninfo_unexecuted_blocks=1 00:30:17.766 00:30:17.766 ' 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.766 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.767 10:58:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:17.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:17.767 10:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:24.337 
10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:24.337 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:24.337 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:24.337 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:24.338 Found net devices under 0000:86:00.0: cvl_0_0 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:24.338 Found net devices under 0000:86:00.1: cvl_0_1 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:24.338 10:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:24.338 10:58:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:24.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:24.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:30:24.338 00:30:24.338 --- 10.0.0.2 ping statistics --- 00:30:24.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.338 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:24.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:24.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:30:24.338 00:30:24.338 --- 10.0.0.1 ping statistics --- 00:30:24.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.338 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:24.338 10:58:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:24.338 ************************************ 00:30:24.338 START TEST nvmf_target_disconnect_tc1 00:30:24.338 ************************************ 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:24.338 [2024-11-19 10:58:13.332118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-19 10:58:13.332244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd7ab0 with 
addr=10.0.0.2, port=4420 00:30:24.338 [2024-11-19 10:58:13.332288] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:24.338 [2024-11-19 10:58:13.332314] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:24.338 [2024-11-19 10:58:13.332333] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:24.338 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:24.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:24.338 Initializing NVMe Controllers 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:24.338 00:30:24.338 real 0m0.117s 00:30:24.338 user 0m0.045s 00:30:24.338 sys 0m0.072s 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:24.338 ************************************ 00:30:24.338 END TEST nvmf_target_disconnect_tc1 00:30:24.338 ************************************ 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:24.338 10:58:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:24.338 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:24.339 ************************************ 00:30:24.339 START TEST nvmf_target_disconnect_tc2 00:30:24.339 ************************************ 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4091053 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4091053 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4091053 ']' 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.339 [2024-11-19 10:58:13.469490] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:30:24.339 [2024-11-19 10:58:13.469529] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.339 [2024-11-19 10:58:13.549758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:24.339 [2024-11-19 10:58:13.591282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:24.339 [2024-11-19 10:58:13.591317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:24.339 [2024-11-19 10:58:13.591324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:24.339 [2024-11-19 10:58:13.591330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:24.339 [2024-11-19 10:58:13.591334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:24.339 [2024-11-19 10:58:13.593017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:24.339 [2024-11-19 10:58:13.593139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:24.339 [2024-11-19 10:58:13.593244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:24.339 [2024-11-19 10:58:13.593246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.339 Malloc0 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.339 10:58:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.339 [2024-11-19 10:58:13.768550] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.339 10:58:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.339 [2024-11-19 10:58:13.800761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=4091098 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:24.339 10:58:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:26.251 10:58:15 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 4091053 00:30:26.251 10:58:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 
Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 [2024-11-19 10:58:15.828457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 
00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Write completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 
00:30:26.251 [2024-11-19 10:58:15.828659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.251 Read completed with error (sct=0, sc=8) 00:30:26.251 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 
starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 [2024-11-19 10:58:15.828847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, 
sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Read completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 Write completed with error (sct=0, sc=8) 00:30:26.252 starting I/O failed 00:30:26.252 [2024-11-19 10:58:15.829038] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.252 [2024-11-19 10:58:15.829285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.252 [2024-11-19 10:58:15.829308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.252 qpair failed and we were unable to recover it. 00:30:26.252 [2024-11-19 10:58:15.829494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.252 [2024-11-19 10:58:15.829512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.252 qpair failed and we were unable to recover it. 00:30:26.252 [2024-11-19 10:58:15.829662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.252 [2024-11-19 10:58:15.829673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.252 qpair failed and we were unable to recover it. 00:30:26.252 [2024-11-19 10:58:15.829893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.252 [2024-11-19 10:58:15.829925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.252 qpair failed and we were unable to recover it. 00:30:26.252 [2024-11-19 10:58:15.830051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.252 [2024-11-19 10:58:15.830082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.252 qpair failed and we were unable to recover it. 
00:30:26.252 [2024-11-19 10:58:15.830341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.252 [2024-11-19 10:58:15.830373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.252 qpair failed and we were unable to recover it. 00:30:26.252 [2024-11-19 10:58:15.830505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.252 [2024-11-19 10:58:15.830545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.252 qpair failed and we were unable to recover it. 00:30:26.252 [2024-11-19 10:58:15.830633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.252 [2024-11-19 10:58:15.830643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.252 qpair failed and we were unable to recover it. 00:30:26.252 [2024-11-19 10:58:15.830721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.252 [2024-11-19 10:58:15.830730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.252 qpair failed and we were unable to recover it. 00:30:26.252 [2024-11-19 10:58:15.830897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.252 [2024-11-19 10:58:15.830908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.252 qpair failed and we were unable to recover it. 
00:30:26.252 [2024-11-19 10:58:15.831052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.252 [2024-11-19 10:58:15.831062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.252 qpair failed and we were unable to recover it. 00:30:26.252 [2024-11-19 10:58:15.831306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.252 [2024-11-19 10:58:15.831320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.252 qpair failed and we were unable to recover it. 00:30:26.252 [2024-11-19 10:58:15.831396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.252 [2024-11-19 10:58:15.831405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.252 qpair failed and we were unable to recover it. 00:30:26.252 [2024-11-19 10:58:15.831560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.252 [2024-11-19 10:58:15.831569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.252 qpair failed and we were unable to recover it. 00:30:26.252 [2024-11-19 10:58:15.831767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.252 [2024-11-19 10:58:15.831778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.252 qpair failed and we were unable to recover it. 
00:30:26.252 [2024-11-19 10:58:15.831855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.252 [2024-11-19 10:58:15.831865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.252 qpair failed and we were unable to recover it. 00:30:26.252 [2024-11-19 10:58:15.832136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.252 [2024-11-19 10:58:15.832147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.252 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.832238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.832248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.832477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.832488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.832661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.832671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 
00:30:26.253 [2024-11-19 10:58:15.832770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.832780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.833027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.833037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.833239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.833250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.833437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.833447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.833652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.833663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 
00:30:26.253 [2024-11-19 10:58:15.833962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.833993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.834182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.834228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.834427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.834458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.834634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.834663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.834847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.834857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 
00:30:26.253 [2024-11-19 10:58:15.835053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.835064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.835264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.835298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.835547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.835579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.835765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.835795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.836085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.836117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 
00:30:26.253 [2024-11-19 10:58:15.836323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.836356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.836540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.836571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.836757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.836767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.836947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.836978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 00:30:26.253 [2024-11-19 10:58:15.837250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.253 [2024-11-19 10:58:15.837284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.253 qpair failed and we were unable to recover it. 
00:30:26.253 [2024-11-19 10:58:15.837499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.253 [2024-11-19 10:58:15.837529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.253 qpair failed and we were unable to recover it.
00:30:26.253 [2024-11-19 10:58:15.837674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.253 [2024-11-19 10:58:15.837684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.253 qpair failed and we were unable to recover it.
00:30:26.253 [2024-11-19 10:58:15.837748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.253 [2024-11-19 10:58:15.837757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.253 qpair failed and we were unable to recover it.
00:30:26.253 [2024-11-19 10:58:15.838009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.253 [2024-11-19 10:58:15.838019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.253 qpair failed and we were unable to recover it.
00:30:26.253 [2024-11-19 10:58:15.838214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.253 [2024-11-19 10:58:15.838225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.253 qpair failed and we were unable to recover it.
00:30:26.253 [2024-11-19 10:58:15.838360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.253 [2024-11-19 10:58:15.838370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.253 qpair failed and we were unable to recover it.
00:30:26.253 [2024-11-19 10:58:15.838577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.253 [2024-11-19 10:58:15.838588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.253 qpair failed and we were unable to recover it.
00:30:26.253 [2024-11-19 10:58:15.838648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.253 [2024-11-19 10:58:15.838657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.253 qpair failed and we were unable to recover it.
00:30:26.253 [2024-11-19 10:58:15.838780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.253 [2024-11-19 10:58:15.838789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.253 qpair failed and we were unable to recover it.
00:30:26.253 [2024-11-19 10:58:15.838919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.253 [2024-11-19 10:58:15.838930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.253 qpair failed and we were unable to recover it.
00:30:26.253 [2024-11-19 10:58:15.838995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.253 [2024-11-19 10:58:15.839004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.253 qpair failed and we were unable to recover it.
00:30:26.253 [2024-11-19 10:58:15.839211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.253 [2024-11-19 10:58:15.839225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.253 qpair failed and we were unable to recover it.
00:30:26.253 [2024-11-19 10:58:15.839366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.253 [2024-11-19 10:58:15.839376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.253 qpair failed and we were unable to recover it.
00:30:26.253 [2024-11-19 10:58:15.839477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.839486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.839743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.839754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.839904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.839914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.840076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.840104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.840372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.840404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.840508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.840538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.840789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.840822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.841019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.841050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.841231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.841263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.841499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.841509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.841748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.841757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.841900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.841910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.842064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.842075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.842267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.842278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.842468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.842478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.842741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.842772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.842959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.842991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.843175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.843213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.843451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.843483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.843723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.843755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.844015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.844046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.844165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.844196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.844441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.844473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.844672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.844702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.844990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.845022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.845196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.845252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.845455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.845485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.845733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.845765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.846028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.846059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.846267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.846299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.846562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.846594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.846765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.846796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.846976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.847005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.847266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.847299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.847578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.847608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.847867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.847898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.848184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.848234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.848495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.254 [2024-11-19 10:58:15.848527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.254 qpair failed and we were unable to recover it.
00:30:26.254 [2024-11-19 10:58:15.848808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.848843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.849121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.849152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.849265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.849297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.849557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.849588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.849776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.849806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.850071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.850102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.850388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.850422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.850670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.850701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.850950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.850981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.851250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.851283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.851473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.851503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.851763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.851795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.852083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.852114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.852384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.852418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.852610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.852641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.852781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.852812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.853047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.853077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.853343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.853376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.853611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.853641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.853886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.853917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.854152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.854183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.854409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.854442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.854680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.854710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.854975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.855006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.855294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.855328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.855598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.855630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.855917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.855949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.856223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.856257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.856524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.856555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.856692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.856723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.856960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.856991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.857229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.857262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.857506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.857537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.857790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.857821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.858057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.858090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.858355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.858387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.858626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.255 [2024-11-19 10:58:15.858658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.255 qpair failed and we were unable to recover it.
00:30:26.255 [2024-11-19 10:58:15.858830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.858860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.859109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.859140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.859380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.859415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.859540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.859576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.859843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.859874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.860180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.860221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.860466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.860498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.860708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.860739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.860935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.860967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.861153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.861183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.861472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.861504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.861747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.861778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.862037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.862069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.862242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.862274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.862545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.862575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.862745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.862777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.863043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.863074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.863348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.863382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.863639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.863670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.863920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.863951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.864217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.864249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.864487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.864519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.864727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.864769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.865031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.865062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.865245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.865278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.865462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.865493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.865690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.256 [2024-11-19 10:58:15.865721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.256 qpair failed and we were unable to recover it.
00:30:26.256 [2024-11-19 10:58:15.865929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.256 [2024-11-19 10:58:15.865960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.256 qpair failed and we were unable to recover it. 00:30:26.256 [2024-11-19 10:58:15.866213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.256 [2024-11-19 10:58:15.866246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.256 qpair failed and we were unable to recover it. 00:30:26.256 [2024-11-19 10:58:15.866500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.256 [2024-11-19 10:58:15.866530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.256 qpair failed and we were unable to recover it. 00:30:26.256 [2024-11-19 10:58:15.866740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.256 [2024-11-19 10:58:15.866772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.256 qpair failed and we were unable to recover it. 00:30:26.256 [2024-11-19 10:58:15.866942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.256 [2024-11-19 10:58:15.866973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-19 10:58:15.867218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.867251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.867517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.867548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.867752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.867783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.868034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.868065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.868250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.868283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-19 10:58:15.868465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.868496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.868687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.868716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.868971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.869002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.869261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.869295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.869417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.869447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-19 10:58:15.869620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.869651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.869824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.869860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.870078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.870110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.870364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.870397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.870636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.870667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-19 10:58:15.870909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.870940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.871129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.871160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.871406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.871439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.871677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.871707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.871897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.871928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-19 10:58:15.872219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.872252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.872369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.872401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.872674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.872704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.872891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.872923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.873160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.873191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-19 10:58:15.873381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.873413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.873698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.873729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.874011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.874044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.874224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.874256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.874510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.874542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-19 10:58:15.874858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.874889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.875139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.875171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.875374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.875406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.875669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.875701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.875888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.875919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-19 10:58:15.876052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.876084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.876224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.257 [2024-11-19 10:58:15.876257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.257 qpair failed and we were unable to recover it. 00:30:26.257 [2024-11-19 10:58:15.876498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.876529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.876813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.876846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.877042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.877072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 
00:30:26.258 [2024-11-19 10:58:15.877242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.877275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.877515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.877547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.877795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.877827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.878013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.878044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.878249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.878282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 
00:30:26.258 [2024-11-19 10:58:15.878466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.878497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.878632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.878664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.878870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.878900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.879120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.879151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.879426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.879459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 
00:30:26.258 [2024-11-19 10:58:15.879665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.879697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.879939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.879976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.880221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.880253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.880543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.880575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.880819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.880850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 
00:30:26.258 [2024-11-19 10:58:15.881092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.881123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.881310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.881343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.881629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.881659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.881852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.881884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.882159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.882190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 
00:30:26.258 [2024-11-19 10:58:15.882320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.882352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.882620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.882651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.882919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.882949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.883190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.883231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.883403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.883435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 
00:30:26.258 [2024-11-19 10:58:15.883629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.883659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.883929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.883961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.884151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.884182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.884390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.884423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 00:30:26.258 [2024-11-19 10:58:15.884675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.258 [2024-11-19 10:58:15.884705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.258 qpair failed and we were unable to recover it. 
00:30:26.258 [2024-11-19 10:58:15.884888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.258 [2024-11-19 10:58:15.884920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.258 qpair failed and we were unable to recover it.
00:30:26.258 [2024-11-19 10:58:15.885181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.258 [2024-11-19 10:58:15.885223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.258 qpair failed and we were unable to recover it.
00:30:26.258 [2024-11-19 10:58:15.885470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.258 [2024-11-19 10:58:15.885501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.258 qpair failed and we were unable to recover it.
00:30:26.258 [2024-11-19 10:58:15.885703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.258 [2024-11-19 10:58:15.885734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.258 qpair failed and we were unable to recover it.
00:30:26.258 [2024-11-19 10:58:15.886001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.886032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.886273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.886307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.886442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.886473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.886712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.886744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.887012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.887044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.887330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.887363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.887639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.887670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.887909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.887941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.888116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.888147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.888439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.888472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.888724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.888755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.888959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.888990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.889183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.889224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.889478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.889510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.889798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.889829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.890103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.890135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.890351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.890384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.890655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.890692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.890972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.891004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.891198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.891238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.891527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.891559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.891822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.891853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.892146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.892177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.892381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.892414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.892683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.892713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.892955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.892986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.893257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.893290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.893415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.893445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.893711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.893743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.893868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.893898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.894136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.894169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.894476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.894508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.894711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.894743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.894892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.894921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.895172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.895214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.895472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.895503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.895687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.895717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-19 10:58:15.895999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.259 [2024-11-19 10:58:15.896030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.896275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.896308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.896622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.896654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.896885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.896917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.897131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.897164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.897363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.897395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.897635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.897666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.897948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.898034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.898328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.898367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.898638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.898671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.898942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.898975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.899225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.899258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.899457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.899490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.899623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.899655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.899794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.899826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.900053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.900085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.900300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.900335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.900511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.900542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.900726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.900758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.900961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.900992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.901254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.901296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.901516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.901548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.901718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.901749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.901953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.901985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.902124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.902155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.902384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.902416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.902681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.902712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.902829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.902861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.903128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.903159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.903289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.903322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.903588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.903621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.903807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.903839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.904110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.904141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.904435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.260 [2024-11-19 10:58:15.904470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.260 qpair failed and we were unable to recover it.
00:30:26.260 [2024-11-19 10:58:15.904734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.904766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.905071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.905104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.905326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.905359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.905549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.905580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.905768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.905798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.906084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.906116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.906322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.906354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.906627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.906658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.906813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.906844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.907058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.907090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.907289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.907321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.907524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.907555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.907757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.907789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.907983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.908015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.908273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.908307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.908553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.908584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.908850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.908882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.909171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.909217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.909393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.909425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.909723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.909754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.910004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.910035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.910273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.910306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.910563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.910594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.910832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.910864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.911116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.911147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.911451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.911483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.911626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.911664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.911905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.911937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.912214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.912247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.912521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.912553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.912747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.912778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.913018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.913049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.913231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.913263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.913451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.913482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.913760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.913792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.914048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.914080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.914282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.914315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.261 [2024-11-19 10:58:15.914538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.261 [2024-11-19 10:58:15.914569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.261 qpair failed and we were unable to recover it.
00:30:26.262 [2024-11-19 10:58:15.914820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.262 [2024-11-19 10:58:15.914851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.262 qpair failed and we were unable to recover it.
00:30:26.262 [2024-11-19 10:58:15.915035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.262 [2024-11-19 10:58:15.915066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.262 qpair failed and we were unable to recover it.
00:30:26.262 [2024-11-19 10:58:15.915364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.915398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.915525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.915556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.915759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.915791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.915987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.916019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.916247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.916281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 
00:30:26.262 [2024-11-19 10:58:15.916579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.916611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.916898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.916930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.917196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.917239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.917548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.917580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.917760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.917791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 
00:30:26.262 [2024-11-19 10:58:15.918048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.918081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.918365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.918399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.918678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.918709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.918994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.919027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.919304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.919338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 
00:30:26.262 [2024-11-19 10:58:15.919561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.919593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.919860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.919891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.920149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.920180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.920480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.920513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.920662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.920694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 
00:30:26.262 [2024-11-19 10:58:15.920939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.920970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.921265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.921299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.921514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.921546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.921672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.921705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.921974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.922006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 
00:30:26.262 [2024-11-19 10:58:15.922187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.922232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.922494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.922532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.922722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.922755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.922948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.922980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.923176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.923221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 
00:30:26.262 [2024-11-19 10:58:15.923418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.923451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.923706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.923740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.923960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.923992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.924181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.924235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 00:30:26.262 [2024-11-19 10:58:15.924490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.262 [2024-11-19 10:58:15.924522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.262 qpair failed and we were unable to recover it. 
00:30:26.262 [2024-11-19 10:58:15.924720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.924752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.924998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.925030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.925228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.925261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.925482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.925514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.925646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.925678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 
00:30:26.263 [2024-11-19 10:58:15.925992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.926025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.926223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.926276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.926538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.926570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.926859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.926891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.927093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.927129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 
00:30:26.263 [2024-11-19 10:58:15.927322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.927356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.927550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.927582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.927779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.927812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.928080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.928113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.928344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.928380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 
00:30:26.263 [2024-11-19 10:58:15.928605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.928636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.928852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.928885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.929073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.929105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.929224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.929263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.929511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.929543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 
00:30:26.263 [2024-11-19 10:58:15.929747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.929778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.930051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.930082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.930352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.930388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.930540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.930572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.930777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.930809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 
00:30:26.263 [2024-11-19 10:58:15.931076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.931108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.931366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.931400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.931579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.931611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.931807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.931840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.932109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.932142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 
00:30:26.263 [2024-11-19 10:58:15.932423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.932457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.932665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.932697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.932965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.932999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.933275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.933313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.933442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.933474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 
00:30:26.263 [2024-11-19 10:58:15.933670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.933702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.933844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.933877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.263 [2024-11-19 10:58:15.934142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.263 [2024-11-19 10:58:15.934174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.263 qpair failed and we were unable to recover it. 00:30:26.264 [2024-11-19 10:58:15.934387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.264 [2024-11-19 10:58:15.934423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.264 qpair failed and we were unable to recover it. 00:30:26.264 [2024-11-19 10:58:15.934695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.264 [2024-11-19 10:58:15.934728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.264 qpair failed and we were unable to recover it. 
00:30:26.264 [2024-11-19 10:58:15.934905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.264 [2024-11-19 10:58:15.934937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.264 qpair failed and we were unable to recover it. 00:30:26.264 [2024-11-19 10:58:15.935115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.264 [2024-11-19 10:58:15.935149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.264 qpair failed and we were unable to recover it. 00:30:26.264 [2024-11-19 10:58:15.935361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.264 [2024-11-19 10:58:15.935394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.264 qpair failed and we were unable to recover it. 00:30:26.264 [2024-11-19 10:58:15.935601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.264 [2024-11-19 10:58:15.935633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.264 qpair failed and we were unable to recover it. 00:30:26.264 [2024-11-19 10:58:15.935778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.264 [2024-11-19 10:58:15.935811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.264 qpair failed and we were unable to recover it. 
00:30:26.264 [2024-11-19 10:58:15.936071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.264 [2024-11-19 10:58:15.936103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.264 qpair failed and we were unable to recover it. 00:30:26.264 [2024-11-19 10:58:15.936282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.264 [2024-11-19 10:58:15.936317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.264 qpair failed and we were unable to recover it. 00:30:26.264 [2024-11-19 10:58:15.936566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.264 [2024-11-19 10:58:15.936598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.264 qpair failed and we were unable to recover it. 00:30:26.264 [2024-11-19 10:58:15.936858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.264 [2024-11-19 10:58:15.936891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.264 qpair failed and we were unable to recover it. 00:30:26.264 [2024-11-19 10:58:15.937187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.264 [2024-11-19 10:58:15.937233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.264 qpair failed and we were unable to recover it. 
00:30:26.264 [2024-11-19 10:58:15.937361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.937393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.937599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.937631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.937899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.937931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.938126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.938158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.938350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.938384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.938512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.938544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.938720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.938754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.939059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.939091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.939222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.939261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.939509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.939541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.939739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.939772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.939968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.940000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.940177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.940243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.940508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.940539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.940667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.940699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.941022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.941055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.941233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.941267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.941515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.941547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.941757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.941789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.942061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.942094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.942403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.942438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.942645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.942678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.942937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.942969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.943148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.943180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.943386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.943419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.264 qpair failed and we were unable to recover it.
00:30:26.264 [2024-11-19 10:58:15.943684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.264 [2024-11-19 10:58:15.943717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.944006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.944038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.944347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.944380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.944512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.944545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.944748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.944780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.945051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.945085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.945192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.945237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.945487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.945520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.945683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.945715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.946008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.946041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.946321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.946356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.946640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.946672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.946882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.946914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.947165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.947197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.947471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.947504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.947706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.947738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.947955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.947987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.948122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.948157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.948440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.948475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.948749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.948781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.949068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.949101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.949385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.949419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.949637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.949669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.949895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.949935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.950220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.950253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.950388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.950420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.950595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.950627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.950802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.950835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.951028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.951061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.951337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.951371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.951526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.951557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.951753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.951786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.951988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.952020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.952291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.265 [2024-11-19 10:58:15.952325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.265 qpair failed and we were unable to recover it.
00:30:26.265 [2024-11-19 10:58:15.952563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.952595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.952813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.952846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.953115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.953147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.953385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.953419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.953667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.953701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.954001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.954032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.954231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.954265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.954515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.954552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.954778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.954809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.955059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.955091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.955341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.955389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.955594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.955626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.955762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.955796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.956076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.956109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.956348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.956381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.956656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.956688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.956890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.956922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.957053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.957085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.957290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.957325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.957533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.957566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.957699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.957732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.957910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.957942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.958143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.958176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.958484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.958516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.958771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.958803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.958995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.959028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.959322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.959356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.959585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.959617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.959816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.959849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.960125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.960169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.960432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.960467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.960730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.960762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.961056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.961089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.961376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.961411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.961608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.961641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.961903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.266 [2024-11-19 10:58:15.961936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.266 qpair failed and we were unable to recover it.
00:30:26.266 [2024-11-19 10:58:15.962241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.266 [2024-11-19 10:58:15.962274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.266 qpair failed and we were unable to recover it. 00:30:26.266 [2024-11-19 10:58:15.962535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.962565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.962790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.962821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.963092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.963122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.963302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.963335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 
00:30:26.267 [2024-11-19 10:58:15.963563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.963593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.963797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.963827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.964105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.964134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.964413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.964445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.964654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.964684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 
00:30:26.267 [2024-11-19 10:58:15.964948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.964979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.965272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.965305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.965600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.965629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.965850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.965880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.966130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.966160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 
00:30:26.267 [2024-11-19 10:58:15.966354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.966385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.966534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.966563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.966711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.966742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.966977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.967008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.967308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.967343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 
00:30:26.267 [2024-11-19 10:58:15.967582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.967614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.967935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.967967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.968248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.968283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.968559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.968591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.968800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.968832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 
00:30:26.267 [2024-11-19 10:58:15.969110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.969142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.969418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.969451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.969742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.969775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.970049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.970081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.970245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.970278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 
00:30:26.267 [2024-11-19 10:58:15.970553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.970584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.970750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.970783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.970990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.971021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.971218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.971259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.971456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.971490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 
00:30:26.267 [2024-11-19 10:58:15.971707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.971739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.971943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.971974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.972253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.972286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.267 [2024-11-19 10:58:15.972449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.267 [2024-11-19 10:58:15.972482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.267 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.972713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.972746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 
00:30:26.268 [2024-11-19 10:58:15.972948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.972979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.973185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.973229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.973437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.973469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.973664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.973696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.974016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.974048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 
00:30:26.268 [2024-11-19 10:58:15.974244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.974278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.974416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.974447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.974647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.974680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.974928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.974961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.975222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.975256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 
00:30:26.268 [2024-11-19 10:58:15.975384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.975416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.975599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.975632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.975848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.975879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.976158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.976190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.976501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.976534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 
00:30:26.268 [2024-11-19 10:58:15.976789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.976820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.977024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.977056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.977313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.977349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.977649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.977681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.977950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.977983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 
00:30:26.268 [2024-11-19 10:58:15.978212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.978246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.978448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.978480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.978702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.978734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.978952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.978985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.979236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.979269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 
00:30:26.268 [2024-11-19 10:58:15.979532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.979564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.979836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.979869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.980072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.980104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.980342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.980377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.980604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.980636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 
00:30:26.268 [2024-11-19 10:58:15.980923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.980956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.981236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.981271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.981519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.981550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.981736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.981774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.981990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.982022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 
00:30:26.268 [2024-11-19 10:58:15.982295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.982329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.268 qpair failed and we were unable to recover it. 00:30:26.268 [2024-11-19 10:58:15.982458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.268 [2024-11-19 10:58:15.982491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 00:30:26.269 [2024-11-19 10:58:15.982703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.982736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 00:30:26.269 [2024-11-19 10:58:15.983033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.983066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 00:30:26.269 [2024-11-19 10:58:15.983199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.983240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 
00:30:26.269 [2024-11-19 10:58:15.983465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.983497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 00:30:26.269 [2024-11-19 10:58:15.983697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.983730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 00:30:26.269 [2024-11-19 10:58:15.984027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.984059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 00:30:26.269 [2024-11-19 10:58:15.984336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.984369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 00:30:26.269 [2024-11-19 10:58:15.984583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.984614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 
00:30:26.269 [2024-11-19 10:58:15.984855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.984889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 00:30:26.269 [2024-11-19 10:58:15.985140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.985172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 00:30:26.269 [2024-11-19 10:58:15.985396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.985429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 00:30:26.269 [2024-11-19 10:58:15.985687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.985718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 00:30:26.269 [2024-11-19 10:58:15.986029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.986061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 
00:30:26.269 [2024-11-19 10:58:15.986346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.986380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 00:30:26.269 [2024-11-19 10:58:15.986658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.986690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 00:30:26.269 [2024-11-19 10:58:15.986912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.986944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 00:30:26.269 [2024-11-19 10:58:15.987180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.987223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 00:30:26.269 [2024-11-19 10:58:15.987378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.269 [2024-11-19 10:58:15.987410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.269 qpair failed and we were unable to recover it. 
00:30:26.269 [2024-11-19 10:58:15.987681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.269 [2024-11-19 10:58:15.987712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.269 qpair failed and we were unable to recover it.
00:30:26.269 [2024-11-19 10:58:15.987908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.269 [2024-11-19 10:58:15.987941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.269 qpair failed and we were unable to recover it.
00:30:26.269 [2024-11-19 10:58:15.988236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.269 [2024-11-19 10:58:15.988268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.269 qpair failed and we were unable to recover it.
00:30:26.269 [2024-11-19 10:58:15.988460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.269 [2024-11-19 10:58:15.988492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.269 qpair failed and we were unable to recover it.
00:30:26.269 [2024-11-19 10:58:15.988699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.269 [2024-11-19 10:58:15.988732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.269 qpair failed and we were unable to recover it.
00:30:26.269 [2024-11-19 10:58:15.988966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.269 [2024-11-19 10:58:15.988998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.269 qpair failed and we were unable to recover it.
00:30:26.269 [2024-11-19 10:58:15.989322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.269 [2024-11-19 10:58:15.989355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.269 qpair failed and we were unable to recover it.
00:30:26.269 [2024-11-19 10:58:15.989608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.269 [2024-11-19 10:58:15.989641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.269 qpair failed and we were unable to recover it.
00:30:26.269 [2024-11-19 10:58:15.989952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.269 [2024-11-19 10:58:15.989985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.269 qpair failed and we were unable to recover it.
00:30:26.269 [2024-11-19 10:58:15.990251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.269 [2024-11-19 10:58:15.990285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.269 qpair failed and we were unable to recover it.
00:30:26.269 [2024-11-19 10:58:15.990492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.269 [2024-11-19 10:58:15.990525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.269 qpair failed and we were unable to recover it.
00:30:26.269 [2024-11-19 10:58:15.990681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.269 [2024-11-19 10:58:15.990714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.269 qpair failed and we were unable to recover it.
00:30:26.269 [2024-11-19 10:58:15.990979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.269 [2024-11-19 10:58:15.991011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.269 qpair failed and we were unable to recover it.
00:30:26.269 [2024-11-19 10:58:15.991213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.269 [2024-11-19 10:58:15.991247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.269 qpair failed and we were unable to recover it.
00:30:26.269 [2024-11-19 10:58:15.991383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.269 [2024-11-19 10:58:15.991415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.269 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.991544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.991577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.991846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.991878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.992138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.992170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.992471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.992510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.992741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.992774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.992966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.992999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.993193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.993238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.993445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.993477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.993693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.993725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.993955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.993987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.994242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.994276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.994477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.994509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.994646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.994679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.994810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.994842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.995098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.995131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.995391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.995425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.995615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.995647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.995868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.995900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.996011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.996044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.996177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.996228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.996424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.996457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.996611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.996644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.996775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.996807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.997077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.997109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.997339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.997372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.997560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.997591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.997747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.997779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.997983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.998015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.998273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.998307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.998462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.998494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.998696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.998729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.998936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.998967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.999280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.999313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.999567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.999600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:15.999931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:15.999963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:16.000145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:16.000177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:16.000460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.270 [2024-11-19 10:58:16.000493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.270 qpair failed and we were unable to recover it.
00:30:26.270 [2024-11-19 10:58:16.000738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.000771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.000925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.000958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.001139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.001171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.001462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.001496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.001623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.001655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.001804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.001837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.002117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.002154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.002312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.002345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.002494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.002528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.002758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.002790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.002987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.003021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.003278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.003312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.003460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.003493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.003642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.003674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.003907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.003939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.004194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.004236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.004382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.004415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.004672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.004705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.004908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.004939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.005142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.005174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.005396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.005428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.005636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.005668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.005954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.005986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.006186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.006230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.006379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.006410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.006551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.006584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.006778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.006811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.007079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.007112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.007320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.007354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.007608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.007640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.007790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.007822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.007973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.008004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.008191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.008254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.008522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.008625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.008866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.008905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.009131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.009164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.009459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.009494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.009652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.271 [2024-11-19 10:58:16.009686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.271 qpair failed and we were unable to recover it.
00:30:26.271 [2024-11-19 10:58:16.009817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.272 [2024-11-19 10:58:16.009850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.272 qpair failed and we were unable to recover it.
00:30:26.272 [2024-11-19 10:58:16.009971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.272 [2024-11-19 10:58:16.010004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.272 qpair failed and we were unable to recover it.
00:30:26.272 [2024-11-19 10:58:16.010313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.272 [2024-11-19 10:58:16.010347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.272 qpair failed and we were unable to recover it.
00:30:26.272 [2024-11-19 10:58:16.010554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.272 [2024-11-19 10:58:16.010587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.272 qpair failed and we were unable to recover it.
00:30:26.272 [2024-11-19 10:58:16.010807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.272 [2024-11-19 10:58:16.010840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.272 qpair failed and we were unable to recover it.
00:30:26.272 [2024-11-19 10:58:16.010972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.011005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.011213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.011246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.011459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.011492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.011749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.011791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.011997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.012030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 
00:30:26.272 [2024-11-19 10:58:16.012292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.012327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.012536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.012570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.012778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.012810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.013069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.013102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.013362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.013396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 
00:30:26.272 [2024-11-19 10:58:16.013616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.013648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.013861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.013893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.014093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.014125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.014347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.014382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.014567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.014599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 
00:30:26.272 [2024-11-19 10:58:16.014879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.014912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.015120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.015152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.015454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.015488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.015737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.015770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.016050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.016082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 
00:30:26.272 [2024-11-19 10:58:16.016355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.016390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.016624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.016656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.016937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.016969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.017193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.017234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.017398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.017430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 
00:30:26.272 [2024-11-19 10:58:16.017625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.017657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.017963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.017996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.018193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.018235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.018506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.018540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.018684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.018717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 
00:30:26.272 [2024-11-19 10:58:16.019035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.019068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.019374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.019408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.272 [2024-11-19 10:58:16.019633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.272 [2024-11-19 10:58:16.019665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.272 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.019940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.019973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.020174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.020219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 
00:30:26.273 [2024-11-19 10:58:16.020425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.020459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.020668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.020702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.021039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.021073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.021304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.021340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.021546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.021580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 
00:30:26.273 [2024-11-19 10:58:16.021877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.021910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.022095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.022128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.022402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.022438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.022593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.022637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.022849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.022883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 
00:30:26.273 [2024-11-19 10:58:16.023071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.023105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.023332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.023366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.023520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.023552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.023823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.023857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.024015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.024047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 
00:30:26.273 [2024-11-19 10:58:16.024316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.024352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.024507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.024539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.024797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.024831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.025113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.025147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.025399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.025435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 
00:30:26.273 [2024-11-19 10:58:16.025623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.025657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.025878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.025911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.026198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.026256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.026471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.026504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.026640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.026673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 
00:30:26.273 [2024-11-19 10:58:16.026830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.026862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.027121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.027154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.027299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.027333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.027490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.027523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 00:30:26.273 [2024-11-19 10:58:16.027776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.273 [2024-11-19 10:58:16.027809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.273 qpair failed and we were unable to recover it. 
00:30:26.551 [2024-11-19 10:58:16.028083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.028117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 00:30:26.551 [2024-11-19 10:58:16.028345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.028379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 00:30:26.551 [2024-11-19 10:58:16.028527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.028561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 00:30:26.551 [2024-11-19 10:58:16.028713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.028746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 00:30:26.551 [2024-11-19 10:58:16.029076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.029109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 
00:30:26.551 [2024-11-19 10:58:16.029389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.029467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 00:30:26.551 [2024-11-19 10:58:16.029648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.029684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 00:30:26.551 [2024-11-19 10:58:16.029901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.029935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 00:30:26.551 [2024-11-19 10:58:16.030224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.030258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 00:30:26.551 [2024-11-19 10:58:16.030410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.030442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 
00:30:26.551 [2024-11-19 10:58:16.030652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.030685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 00:30:26.551 [2024-11-19 10:58:16.031055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.031087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 00:30:26.551 [2024-11-19 10:58:16.031295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.031328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 00:30:26.551 [2024-11-19 10:58:16.031524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.031555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 00:30:26.551 [2024-11-19 10:58:16.031760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.031793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 
00:30:26.551 [2024-11-19 10:58:16.031994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.032024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 00:30:26.551 [2024-11-19 10:58:16.032228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.032262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 00:30:26.551 [2024-11-19 10:58:16.032459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.032492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 00:30:26.551 [2024-11-19 10:58:16.032697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.032739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 00:30:26.551 [2024-11-19 10:58:16.032952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.032983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 
00:30:26.551 [2024-11-19 10:58:16.033129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.551 [2024-11-19 10:58:16.033160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.551 qpair failed and we were unable to recover it. 00:30:26.551 [2024-11-19 10:58:16.033312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.552 [2024-11-19 10:58:16.033345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.552 qpair failed and we were unable to recover it. 00:30:26.552 [2024-11-19 10:58:16.033627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.552 [2024-11-19 10:58:16.033659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.552 qpair failed and we were unable to recover it. 00:30:26.552 [2024-11-19 10:58:16.033976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.552 [2024-11-19 10:58:16.034008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.552 qpair failed and we were unable to recover it. 00:30:26.552 [2024-11-19 10:58:16.034220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.552 [2024-11-19 10:58:16.034255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.552 qpair failed and we were unable to recover it. 
00:30:26.552 [2024-11-19 10:58:16.034445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.552 [2024-11-19 10:58:16.034478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.552 qpair failed and we were unable to recover it. 00:30:26.552 [2024-11-19 10:58:16.034729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.552 [2024-11-19 10:58:16.034761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.552 qpair failed and we were unable to recover it. 00:30:26.552 [2024-11-19 10:58:16.034993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.552 [2024-11-19 10:58:16.035025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.552 qpair failed and we were unable to recover it. 00:30:26.552 [2024-11-19 10:58:16.035233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.552 [2024-11-19 10:58:16.035268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.552 qpair failed and we were unable to recover it. 00:30:26.552 [2024-11-19 10:58:16.035416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.552 [2024-11-19 10:58:16.035448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.552 qpair failed and we were unable to recover it. 
00:30:26.552 [2024-11-19 10:58:16.035642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.035674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.035910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.035942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.036220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.036253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.036402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.036435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.036694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.036727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.037005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.037036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.037358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.037393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.037530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.037562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.037713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.037747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.038113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.038145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.038307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.038341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.038477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.038510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.038647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.038679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.038800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.038831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.039092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.039123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.039272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.039312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.039466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.039498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.039658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.039691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.039893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.039925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.040154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.040187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.040381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.040415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.040553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.040586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.040723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.040755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.040889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.040921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.041216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.041249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.041452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.041485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.041694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.041727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.041937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.041970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.552 qpair failed and we were unable to recover it.
00:30:26.552 [2024-11-19 10:58:16.042109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.552 [2024-11-19 10:58:16.042149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.042399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.042433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.042551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.042585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.042834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.042866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.043145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.043178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.043349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.043383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.043533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.043565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.043714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.043746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.044087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.044120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.044378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.044413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.044621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.044654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.044813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.044845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.045127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.045160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.045388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.045421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.045623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.045658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.045869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.045902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.046180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.046224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.046436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.046468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.046674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.046706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.046928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.046960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.047166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.047198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.047400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.047433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.047628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.047661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.047893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.047926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.048070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.048104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.048334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.048368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.048526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.048558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.048862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.048899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.049136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.049167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.049381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.049414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.049620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.049654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.049904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.049937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.050082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.050113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.050413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.050447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.050727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.050760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.050996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.051027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.051245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.553 [2024-11-19 10:58:16.051278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.553 qpair failed and we were unable to recover it.
00:30:26.553 [2024-11-19 10:58:16.051489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.051521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.051663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.051695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.051903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.051934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.052082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.052122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.052315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.052348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.052557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.052589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.052739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.052772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.052923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.052954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.053221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.053255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.053455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.053489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.053703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.053735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.053996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.054029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.054259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.054292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.054428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.054460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.054668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.054701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.054942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.054973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.055238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.055272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.055509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.055541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.055746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.055779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.056003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.056034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.056233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.056268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.056419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.056451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.056646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.056679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.056990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.057022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.057174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.057212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.057355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.057387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.057568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.057602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.057788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.057819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.058025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.058058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.058258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.554 [2024-11-19 10:58:16.058294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.554 qpair failed and we were unable to recover it.
00:30:26.554 [2024-11-19 10:58:16.058504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.554 [2024-11-19 10:58:16.058544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.554 qpair failed and we were unable to recover it. 00:30:26.554 [2024-11-19 10:58:16.058773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.554 [2024-11-19 10:58:16.058804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.554 qpair failed and we were unable to recover it. 00:30:26.554 [2024-11-19 10:58:16.059058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.554 [2024-11-19 10:58:16.059089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.554 qpair failed and we were unable to recover it. 00:30:26.554 [2024-11-19 10:58:16.059309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.554 [2024-11-19 10:58:16.059343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.554 qpair failed and we were unable to recover it. 00:30:26.554 [2024-11-19 10:58:16.059493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.554 [2024-11-19 10:58:16.059526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.554 qpair failed and we were unable to recover it. 
00:30:26.554 [2024-11-19 10:58:16.059734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.554 [2024-11-19 10:58:16.059767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.554 qpair failed and we were unable to recover it. 00:30:26.554 [2024-11-19 10:58:16.060081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.060114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.060325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.060358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.060587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.060619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.060766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.060799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 
00:30:26.555 [2024-11-19 10:58:16.061053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.061084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.061312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.061346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.061549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.061580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.061846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.061878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.062146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.062179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 
00:30:26.555 [2024-11-19 10:58:16.062478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.062511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.062770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.062802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.063057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.063089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.063374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.063408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.063605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.063636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 
00:30:26.555 [2024-11-19 10:58:16.063778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.063810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.064099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.064132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.064396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.064429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.064659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.064691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.064844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.064877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 
00:30:26.555 [2024-11-19 10:58:16.065127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.065158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.065370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.065404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.065618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.065651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.065971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.066003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.066219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.066252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 
00:30:26.555 [2024-11-19 10:58:16.066581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.066612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.066847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.066880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.067138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.067170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.067363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.067397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.067544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.067575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 
00:30:26.555 [2024-11-19 10:58:16.067715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.067748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.068067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.068099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.068324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.068358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.068502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.068535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.068744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.068776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 
00:30:26.555 [2024-11-19 10:58:16.068989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.069026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.069317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.069376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.555 [2024-11-19 10:58:16.069586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.555 [2024-11-19 10:58:16.069619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.555 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.069761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.069794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.070090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.070121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 
00:30:26.556 [2024-11-19 10:58:16.070335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.070369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.070567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.070598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.070808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.070841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.071089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.071121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.071276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.071310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 
00:30:26.556 [2024-11-19 10:58:16.071463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.071495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.071770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.071802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.072005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.072037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.072296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.072330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.072611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.072643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 
00:30:26.556 [2024-11-19 10:58:16.072984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.073016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.073301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.073334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.073480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.073511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.073698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.073731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.073972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.074004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 
00:30:26.556 [2024-11-19 10:58:16.074265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.074299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.074529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.074561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.074817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.074850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.075148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.075180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.075330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.075363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 
00:30:26.556 [2024-11-19 10:58:16.075563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.075596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.075911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.075943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.076232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.076267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.076424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.076456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.076678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.076710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 
00:30:26.556 [2024-11-19 10:58:16.077000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.077031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.077249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.077283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.077417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.077449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.077675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.077708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.556 qpair failed and we were unable to recover it. 00:30:26.556 [2024-11-19 10:58:16.077914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.556 [2024-11-19 10:58:16.077946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.557 qpair failed and we were unable to recover it. 
00:30:26.557 [2024-11-19 10:58:16.078131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.557 [2024-11-19 10:58:16.078164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.557 qpair failed and we were unable to recover it. 00:30:26.557 [2024-11-19 10:58:16.078382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.557 [2024-11-19 10:58:16.078414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.557 qpair failed and we were unable to recover it. 00:30:26.557 [2024-11-19 10:58:16.078597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.557 [2024-11-19 10:58:16.078629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.557 qpair failed and we were unable to recover it. 00:30:26.557 [2024-11-19 10:58:16.078874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.557 [2024-11-19 10:58:16.078905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.557 qpair failed and we were unable to recover it. 00:30:26.557 [2024-11-19 10:58:16.079216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.557 [2024-11-19 10:58:16.079251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.557 qpair failed and we were unable to recover it. 
00:30:26.557 [2024-11-19 10:58:16.079503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.557 [2024-11-19 10:58:16.079541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.557 qpair failed and we were unable to recover it. 00:30:26.557 [2024-11-19 10:58:16.079735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.557 [2024-11-19 10:58:16.079767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.557 qpair failed and we were unable to recover it. 00:30:26.557 [2024-11-19 10:58:16.080019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.557 [2024-11-19 10:58:16.080052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.557 qpair failed and we were unable to recover it. 00:30:26.557 [2024-11-19 10:58:16.080357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.557 [2024-11-19 10:58:16.080390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.557 qpair failed and we were unable to recover it. 00:30:26.557 [2024-11-19 10:58:16.080594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.557 [2024-11-19 10:58:16.080626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.557 qpair failed and we were unable to recover it. 
00:30:26.557 [2024-11-19 10:58:16.080829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.557 [2024-11-19 10:58:16.080862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.557 qpair failed and we were unable to recover it.
00:30:26.560 [2024-11-19 10:58:16.109210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.109249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 00:30:26.560 [2024-11-19 10:58:16.109506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.109538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 00:30:26.560 [2024-11-19 10:58:16.109738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.109770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 00:30:26.560 [2024-11-19 10:58:16.110045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.110077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 00:30:26.560 [2024-11-19 10:58:16.110359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.110391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 
00:30:26.560 [2024-11-19 10:58:16.110645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.110676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 00:30:26.560 [2024-11-19 10:58:16.110987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.111019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 00:30:26.560 [2024-11-19 10:58:16.111235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.111269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 00:30:26.560 [2024-11-19 10:58:16.111423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.111454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 00:30:26.560 [2024-11-19 10:58:16.111662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.111694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 
00:30:26.560 [2024-11-19 10:58:16.111848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.111879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 00:30:26.560 [2024-11-19 10:58:16.112170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.112209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 00:30:26.560 [2024-11-19 10:58:16.112353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.112385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 00:30:26.560 [2024-11-19 10:58:16.112660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.112691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 00:30:26.560 [2024-11-19 10:58:16.112907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.112940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 
00:30:26.560 [2024-11-19 10:58:16.113146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.113179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 00:30:26.560 [2024-11-19 10:58:16.113341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.113372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 00:30:26.560 [2024-11-19 10:58:16.113651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.113684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 00:30:26.560 [2024-11-19 10:58:16.113905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.113936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 00:30:26.560 [2024-11-19 10:58:16.114151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.560 [2024-11-19 10:58:16.114183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.560 qpair failed and we were unable to recover it. 
00:30:26.560 [2024-11-19 10:58:16.114450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.114482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.114630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.114662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.114946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.114978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.115258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.115292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.115484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.115516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 
00:30:26.561 [2024-11-19 10:58:16.115716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.115748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.115985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.116018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.116277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.116312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.116583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.116616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.116798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.116831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 
00:30:26.561 [2024-11-19 10:58:16.117096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.117129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.117413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.117447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.117696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.117728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.118004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.118036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.118246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.118279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 
00:30:26.561 [2024-11-19 10:58:16.118477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.118508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.118651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.118683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.118938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.118969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.119229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.119262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.119477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.119510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 
00:30:26.561 [2024-11-19 10:58:16.119691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.119728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.119946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.119978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.120263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.120297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.120437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.120468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.120671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.120704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 
00:30:26.561 [2024-11-19 10:58:16.120851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.120882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.121080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.121112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.121331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.121366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.121552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.121582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.121840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.121872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 
00:30:26.561 [2024-11-19 10:58:16.122069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.122102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.122361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.122394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.122692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.122724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.122960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.122994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.123239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.123272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 
00:30:26.561 [2024-11-19 10:58:16.123459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.123492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.123700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.561 [2024-11-19 10:58:16.123731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.561 qpair failed and we were unable to recover it. 00:30:26.561 [2024-11-19 10:58:16.124025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.124058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 00:30:26.562 [2024-11-19 10:58:16.124288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.124321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 00:30:26.562 [2024-11-19 10:58:16.124502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.124534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 
00:30:26.562 [2024-11-19 10:58:16.124764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.124798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 00:30:26.562 [2024-11-19 10:58:16.125085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.125117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 00:30:26.562 [2024-11-19 10:58:16.125310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.125343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 00:30:26.562 [2024-11-19 10:58:16.125538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.125569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 00:30:26.562 [2024-11-19 10:58:16.125765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.125797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 
00:30:26.562 [2024-11-19 10:58:16.125999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.126029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 00:30:26.562 [2024-11-19 10:58:16.126307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.126341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 00:30:26.562 [2024-11-19 10:58:16.126554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.126586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 00:30:26.562 [2024-11-19 10:58:16.126790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.126822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 00:30:26.562 [2024-11-19 10:58:16.127156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.127189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 
00:30:26.562 [2024-11-19 10:58:16.127448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.127481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 00:30:26.562 [2024-11-19 10:58:16.127719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.127751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 00:30:26.562 [2024-11-19 10:58:16.128037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.128069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 00:30:26.562 [2024-11-19 10:58:16.128339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.128373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 00:30:26.562 [2024-11-19 10:58:16.128503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.128535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 
00:30:26.562 [2024-11-19 10:58:16.128744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.562 [2024-11-19 10:58:16.128776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.562 qpair failed and we were unable to recover it. 
00:30:26.565 [2024-11-19 10:58:16.158704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.565 [2024-11-19 10:58:16.158737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.565 qpair failed and we were unable to recover it. 00:30:26.565 [2024-11-19 10:58:16.158883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.565 [2024-11-19 10:58:16.158913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.565 qpair failed and we were unable to recover it. 00:30:26.565 [2024-11-19 10:58:16.159098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.565 [2024-11-19 10:58:16.159131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.565 qpair failed and we were unable to recover it. 00:30:26.565 [2024-11-19 10:58:16.159377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.565 [2024-11-19 10:58:16.159411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.565 qpair failed and we were unable to recover it. 00:30:26.565 [2024-11-19 10:58:16.159607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.565 [2024-11-19 10:58:16.159639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.565 qpair failed and we were unable to recover it. 
00:30:26.565 [2024-11-19 10:58:16.159949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.565 [2024-11-19 10:58:16.159982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.565 qpair failed and we were unable to recover it. 00:30:26.565 [2024-11-19 10:58:16.160181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.565 [2024-11-19 10:58:16.160224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.565 qpair failed and we were unable to recover it. 00:30:26.565 [2024-11-19 10:58:16.160420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.565 [2024-11-19 10:58:16.160454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.565 qpair failed and we were unable to recover it. 00:30:26.565 [2024-11-19 10:58:16.160569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.565 [2024-11-19 10:58:16.160602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.565 qpair failed and we were unable to recover it. 00:30:26.565 [2024-11-19 10:58:16.160834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.565 [2024-11-19 10:58:16.160866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.565 qpair failed and we were unable to recover it. 
00:30:26.565 [2024-11-19 10:58:16.161064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.565 [2024-11-19 10:58:16.161101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.565 qpair failed and we were unable to recover it. 00:30:26.565 [2024-11-19 10:58:16.161333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.565 [2024-11-19 10:58:16.161367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.565 qpair failed and we were unable to recover it. 00:30:26.565 [2024-11-19 10:58:16.161516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.565 [2024-11-19 10:58:16.161549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.565 qpair failed and we were unable to recover it. 00:30:26.565 [2024-11-19 10:58:16.161799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.565 [2024-11-19 10:58:16.161832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.162013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.162045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 
00:30:26.566 [2024-11-19 10:58:16.162299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.162332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.162468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.162500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.162843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.162875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.163071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.163102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.163362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.163395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 
00:30:26.566 [2024-11-19 10:58:16.163581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.163614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.163747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.163780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.164044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.164076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.164258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.164292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.164502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.164534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 
00:30:26.566 [2024-11-19 10:58:16.164767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.164799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.164939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.164971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.165249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.165283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.165431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.165463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.165606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.165637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 
00:30:26.566 [2024-11-19 10:58:16.165954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.165985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.166237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.166270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.166416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.166448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.166702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.166734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.167033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.167065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 
00:30:26.566 [2024-11-19 10:58:16.167273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.167307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.167465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.167496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.167734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.167767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.168021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.168053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.168198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.168241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 
00:30:26.566 [2024-11-19 10:58:16.168402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.168434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.168683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.168716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.168940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.168972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.169155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.169187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.169478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.169511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 
00:30:26.566 [2024-11-19 10:58:16.169709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.169742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.170008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.170040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.170310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.170344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.170472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.170503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.170757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.170789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 
00:30:26.566 [2024-11-19 10:58:16.171071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.566 [2024-11-19 10:58:16.171109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.566 qpair failed and we were unable to recover it. 00:30:26.566 [2024-11-19 10:58:16.171367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.171400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.171659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.171691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.171915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.171947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.172172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.172213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 
00:30:26.567 [2024-11-19 10:58:16.172414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.172446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.172641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.172673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.172970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.173003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.173306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.173350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.173600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.173632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 
00:30:26.567 [2024-11-19 10:58:16.173894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.173925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.174051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.174082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.174283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.174317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.174432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.174463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.174603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.174635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 
00:30:26.567 [2024-11-19 10:58:16.174907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.174938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.175223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.175256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.175453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.175485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.175786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.175818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.176023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.176056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 
00:30:26.567 [2024-11-19 10:58:16.176248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.176282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.176560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.176592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.176808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.176840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.177018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.177049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 00:30:26.567 [2024-11-19 10:58:16.177249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.177281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it. 
00:30:26.567 [2024-11-19 10:58:16.177487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.567 [2024-11-19 10:58:16.177519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.567 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 on addr=10.0.0.2, port=4420, tqpair=0x7f6b34000b90, followed by "qpair failed and we were unable to recover it.") repeats roughly 110 more times between 10:58:16.177 and 10:58:16.207; duplicate entries omitted ...]
00:30:26.570 [2024-11-19 10:58:16.207231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.570 [2024-11-19 10:58:16.207264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.570 qpair failed and we were unable to recover it. 00:30:26.570 [2024-11-19 10:58:16.207489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.570 [2024-11-19 10:58:16.207520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.570 qpair failed and we were unable to recover it. 00:30:26.570 [2024-11-19 10:58:16.207668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.570 [2024-11-19 10:58:16.207700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.570 qpair failed and we were unable to recover it. 00:30:26.570 [2024-11-19 10:58:16.207838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.570 [2024-11-19 10:58:16.207870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.570 qpair failed and we were unable to recover it. 00:30:26.570 [2024-11-19 10:58:16.208066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.570 [2024-11-19 10:58:16.208099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.570 qpair failed and we were unable to recover it. 
00:30:26.570 [2024-11-19 10:58:16.208253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.570 [2024-11-19 10:58:16.208288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.570 qpair failed and we were unable to recover it. 00:30:26.570 [2024-11-19 10:58:16.208436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.208468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.208744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.208788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.208995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.209026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.209241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.209275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 
00:30:26.571 [2024-11-19 10:58:16.209482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.209513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.209765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.209797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.209992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.210023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.210253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.210287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.210544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.210576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 
00:30:26.571 [2024-11-19 10:58:16.210712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.210744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.211030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.211063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.211220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.211253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.211531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.211563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.211839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.211871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 
00:30:26.571 [2024-11-19 10:58:16.212150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.212182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.212494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.212528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.212720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.212751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.213017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.213050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.213329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.213362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 
00:30:26.571 [2024-11-19 10:58:16.213494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.213526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.213708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.213739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.214061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.214092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.214360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.214393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.214596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.214628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 
00:30:26.571 [2024-11-19 10:58:16.214760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.214792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.215017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.215048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.215248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.215281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.215427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.215458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.215740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.215772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 
00:30:26.571 [2024-11-19 10:58:16.215961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.215993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.216278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.216310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.216520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.216552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.216818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.216850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.217104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.217135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 
00:30:26.571 [2024-11-19 10:58:16.217414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.217448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.217649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.217680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.217931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.217963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.571 [2024-11-19 10:58:16.218107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.571 [2024-11-19 10:58:16.218139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.571 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.218445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.218479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 
00:30:26.572 [2024-11-19 10:58:16.218624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.218655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.218850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.218882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.219188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.219252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.219483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.219516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.219794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.219824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 
00:30:26.572 [2024-11-19 10:58:16.220028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.220060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.220320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.220355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.220560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.220592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.220797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.220829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.221024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.221056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 
00:30:26.572 [2024-11-19 10:58:16.221305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.221339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.221533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.221564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.221692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.221725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.222043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.222076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.222282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.222315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 
00:30:26.572 [2024-11-19 10:58:16.222521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.222553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.222768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.222800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.223057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.223088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.223370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.223403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.223610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.223642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 
00:30:26.572 [2024-11-19 10:58:16.223844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.223877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.224153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.224184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.224323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.224356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.224503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.224534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.224672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.224704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 
00:30:26.572 [2024-11-19 10:58:16.224933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.224966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.225115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.225147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.225439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.225471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.225719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.225752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.226047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.226079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 
00:30:26.572 [2024-11-19 10:58:16.226363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.226398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.226630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.226661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.226811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.226843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.227073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.227104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.572 [2024-11-19 10:58:16.227298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.227332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 
00:30:26.572 [2024-11-19 10:58:16.227530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.572 [2024-11-19 10:58:16.227561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.572 qpair failed and we were unable to recover it. 00:30:26.573 [2024-11-19 10:58:16.227867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.573 [2024-11-19 10:58:16.227899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.573 qpair failed and we were unable to recover it. 00:30:26.573 [2024-11-19 10:58:16.228116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.573 [2024-11-19 10:58:16.228148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.573 qpair failed and we were unable to recover it. 00:30:26.573 [2024-11-19 10:58:16.228361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.573 [2024-11-19 10:58:16.228394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.573 qpair failed and we were unable to recover it. 00:30:26.573 [2024-11-19 10:58:16.228588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.573 [2024-11-19 10:58:16.228620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.573 qpair failed and we were unable to recover it. 
00:30:26.573 [2024-11-19 10:58:16.228901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.228933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.229128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.229159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.229350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.229389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.229580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.229611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.229745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.229777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.230054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.230086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.230272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.230306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.230524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.230555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.230701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.230733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.230944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.230975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.231275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.231309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.231445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.231476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.231678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.231710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.231932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.231964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.232189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.232230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.232413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.232444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.232702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.232735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.232959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.232991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.233256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.233289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.233433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.233464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.233670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.233703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.233853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.233884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.234073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.234105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.234362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.234396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.234602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.234634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.234777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.234809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.235111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.235143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.235352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.235385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.235636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.235668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.235968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.236000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.236274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.236307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.236454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.236486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.236799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.236831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.573 [2024-11-19 10:58:16.237037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.573 [2024-11-19 10:58:16.237069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.573 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.237262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.237296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.237480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.237511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.237642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.237675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.238036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.238068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.238289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.238322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.238532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.238564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.238814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.238847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.238983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.239015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.239232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.239272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.239396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.239428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.239703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.239734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.239980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.240012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.240195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.240236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.240372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.240403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.240634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.240667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.240785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.240816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.241035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.241068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.241359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.241392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.241526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.241556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.241705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.241737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.242016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.242047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.242192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.242232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.242368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.242400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.242558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.242590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.242857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.242889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.243114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.243146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.243382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.243415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.243642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.243674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.243903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.243934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.244186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.244232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.244434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.244465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.244656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.244688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.244890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.244921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.574 qpair failed and we were unable to recover it.
00:30:26.574 [2024-11-19 10:58:16.245199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.574 [2024-11-19 10:58:16.245243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.245393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.245424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.245656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.245688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.245921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.245954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.246179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.246221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.246502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.246533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.246680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.246713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.246973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.247004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.247272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.247305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.247455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.247486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.247815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.247847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.248114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.248146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.248446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.248478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.248789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.248820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.249065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.249097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.249303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.249342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.249584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.249615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.249821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.249853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.250128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.250160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.250447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.250479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.250689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.250720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.251002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.251034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.251307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.251341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.251638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.251670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.251965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.251996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.252183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.252223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.252424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.252456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.252665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.575 [2024-11-19 10:58:16.252696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.575 qpair failed and we were unable to recover it.
00:30:26.575 [2024-11-19 10:58:16.252824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.575 [2024-11-19 10:58:16.252856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.575 qpair failed and we were unable to recover it. 00:30:26.575 [2024-11-19 10:58:16.253060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.575 [2024-11-19 10:58:16.253092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.575 qpair failed and we were unable to recover it. 00:30:26.575 [2024-11-19 10:58:16.253319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.575 [2024-11-19 10:58:16.253352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.575 qpair failed and we were unable to recover it. 00:30:26.575 [2024-11-19 10:58:16.253559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.575 [2024-11-19 10:58:16.253590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.575 qpair failed and we were unable to recover it. 00:30:26.575 [2024-11-19 10:58:16.253853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.575 [2024-11-19 10:58:16.253885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.575 qpair failed and we were unable to recover it. 
00:30:26.575 [2024-11-19 10:58:16.254174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.575 [2024-11-19 10:58:16.254212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.575 qpair failed and we were unable to recover it. 00:30:26.575 [2024-11-19 10:58:16.254420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.575 [2024-11-19 10:58:16.254451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.575 qpair failed and we were unable to recover it. 00:30:26.575 [2024-11-19 10:58:16.254587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.575 [2024-11-19 10:58:16.254618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.575 qpair failed and we were unable to recover it. 00:30:26.575 [2024-11-19 10:58:16.254908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.575 [2024-11-19 10:58:16.254941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.575 qpair failed and we were unable to recover it. 00:30:26.575 [2024-11-19 10:58:16.255220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.575 [2024-11-19 10:58:16.255253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.575 qpair failed and we were unable to recover it. 
00:30:26.576 [2024-11-19 10:58:16.255480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.255513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.255765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.255797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.255945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.255977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.256232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.256265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.256439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.256471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 
00:30:26.576 [2024-11-19 10:58:16.256678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.256709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.256935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.256967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.257275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.257309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.257569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.257600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.257803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.257835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 
00:30:26.576 [2024-11-19 10:58:16.257975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.258007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.258286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.258318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.258576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.258608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.258767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.258799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.258993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.259025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 
00:30:26.576 [2024-11-19 10:58:16.259295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.259328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.259483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.259514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.259755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.259798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.260079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.260110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.260269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.260303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 
00:30:26.576 [2024-11-19 10:58:16.260436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.260467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.260619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.260651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.260939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.260972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.261100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.261131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.261261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.261294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 
00:30:26.576 [2024-11-19 10:58:16.261497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.261529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.261686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.261717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.261942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.261973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.262176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.262216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.262346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.262377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 
00:30:26.576 [2024-11-19 10:58:16.262529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.262561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.262767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.262798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.262997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.263029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.263349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.263383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.263676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.263708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 
00:30:26.576 [2024-11-19 10:58:16.263979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.264011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.576 [2024-11-19 10:58:16.264248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.576 [2024-11-19 10:58:16.264281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.576 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.264418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.264451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.264608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.264639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.264831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.264863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 
00:30:26.577 [2024-11-19 10:58:16.265068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.265100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.265355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.265389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.265619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.265650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.265969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.266000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.266232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.266266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 
00:30:26.577 [2024-11-19 10:58:16.266472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.266503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.266703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.266735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.266955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.266986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.267183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.267223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.267432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.267464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 
00:30:26.577 [2024-11-19 10:58:16.267720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.267751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.267953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.267984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.268188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.268231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.268393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.268425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.268620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.268652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 
00:30:26.577 [2024-11-19 10:58:16.268779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.268811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.269056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.269089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.269314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.269355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.269510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.269541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.269752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.269784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 
00:30:26.577 [2024-11-19 10:58:16.269988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.270020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.270212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.270245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.270438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.270469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.270674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.270706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.271011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.271042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 
00:30:26.577 [2024-11-19 10:58:16.271353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.271387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.271579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.271611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.271873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.271905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.272247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.272280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.272484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.272517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 
00:30:26.577 [2024-11-19 10:58:16.272665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.272697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.272930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.272962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.273239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.273273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.273478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.273511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.577 qpair failed and we were unable to recover it. 00:30:26.577 [2024-11-19 10:58:16.273717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.577 [2024-11-19 10:58:16.273749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.578 qpair failed and we were unable to recover it. 
00:30:26.578 [2024-11-19 10:58:16.273943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.273975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.274169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.274209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.274358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.274389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.274548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.274581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.274871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.274902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.275046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.275079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.275368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.275401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.275679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.275710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.275982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.276014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.276244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.276279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.276440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.276471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.276674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.276706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.276965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.276997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.277229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.277262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.277471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.277506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.277714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.277747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.278049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.278080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.278359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.278394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.278549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.278581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.278720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.278752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.278966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.278997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.279130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.279163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.279444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.279483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.279617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.279649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.279948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.279980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.280186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.280239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.280398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.280429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.280639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.280671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.280999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.281031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.281275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.281309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.281566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.281597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.281757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.281789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.281972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.282003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.282199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.282242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.282390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.282422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.282624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.282656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.578 [2024-11-19 10:58:16.282934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.578 [2024-11-19 10:58:16.282965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.578 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.283264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.283299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.283440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.283471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.283693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.283726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.283955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.283987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.284186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.284244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.284500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.284531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.284688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.284720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.284851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.284883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.285178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.285219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.285422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.285453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.285662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.285694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.286054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.286084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.286317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.286351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.286559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.286591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.286798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.286830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.286957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.286988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.287265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.287299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.287500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.287532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.287655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.287687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.288000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.288031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.288297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.288331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.288634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.288665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.288971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.289003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.289236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.289270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.289485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.289516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.289722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.289761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.290066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.290098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.290299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.290331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.290639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.290671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.290891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.290923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.291155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.291186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.291409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.291441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.291653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.579 [2024-11-19 10:58:16.291685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.579 qpair failed and we were unable to recover it.
00:30:26.579 [2024-11-19 10:58:16.292040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.292070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.292283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.292317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.292539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.292570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.292731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.292764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.292986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.293019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.293229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.293261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.293449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.293481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.293681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.293714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.294051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.294082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.294273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.294306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.294472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.294503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.294794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.294826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.295102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.295134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.295380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.295414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.295670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.295702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.295854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.295886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.296165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.296195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.296483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.296515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.296676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.296708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.296914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.296946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.297228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.297262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.297455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.297487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.297795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.297826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.298008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.298040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.298317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.298350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.298570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.298602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.298756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.298789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.298998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.299029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.299301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.299334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.299538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.299570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.299855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.299887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.300143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.300174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.300407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.300446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.300651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.300683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.300881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.300913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.301112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.301145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.301423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.301456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.580 [2024-11-19 10:58:16.301666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.580 [2024-11-19 10:58:16.301698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.580 qpair failed and we were unable to recover it.
00:30:26.581 [2024-11-19 10:58:16.301897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.581 [2024-11-19 10:58:16.301930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.581 qpair failed and we were unable to recover it.
00:30:26.581 [2024-11-19 10:58:16.302186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.581 [2024-11-19 10:58:16.302229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.581 qpair failed and we were unable to recover it.
00:30:26.581 [2024-11-19 10:58:16.302502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.581 [2024-11-19 10:58:16.302534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.581 qpair failed and we were unable to recover it.
00:30:26.581 [2024-11-19 10:58:16.302678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.581 [2024-11-19 10:58:16.302710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.581 qpair failed and we were unable to recover it.
00:30:26.581 [2024-11-19 10:58:16.303022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.581 [2024-11-19 10:58:16.303053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.581 qpair failed and we were unable to recover it.
00:30:26.581 [2024-11-19 10:58:16.303280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.581 [2024-11-19 10:58:16.303314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.581 qpair failed and we were unable to recover it.
00:30:26.581 [2024-11-19 10:58:16.303516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.581 [2024-11-19 10:58:16.303548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.581 qpair failed and we were unable to recover it.
00:30:26.581 [2024-11-19 10:58:16.303757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.303789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.304007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.304039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.304246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.304280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.304486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.304517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.304711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.304743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 
00:30:26.581 [2024-11-19 10:58:16.304864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.304896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.305170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.305218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.305476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.305509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.305732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.305765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.306033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.306063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 
00:30:26.581 [2024-11-19 10:58:16.306301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.306334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.306591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.306623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.306853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.306885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.307142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.307173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.307426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.307460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 
00:30:26.581 [2024-11-19 10:58:16.307643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.307675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.307965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.307996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.308310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.308343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.308571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.308603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.308877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.308909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 
00:30:26.581 [2024-11-19 10:58:16.309111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.309143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.309364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.309398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.309524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.309555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.309769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.309801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.309997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.310028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 
00:30:26.581 [2024-11-19 10:58:16.310242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.310275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.310477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.310508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.310781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.310824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.311097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.311128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-11-19 10:58:16.311282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.581 [2024-11-19 10:58:16.311316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.581 qpair failed and we were unable to recover it. 
00:30:26.582 [2024-11-19 10:58:16.311615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.311649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.311915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.311946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.312157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.312189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.312407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.312439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.312593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.312623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 
00:30:26.582 [2024-11-19 10:58:16.312929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.312961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.313243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.313277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.313531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.313564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.313691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.313724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.313937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.313967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 
00:30:26.582 [2024-11-19 10:58:16.314108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.314140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.314387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.314419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.314613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.314645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.314929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.314960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.315180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.315220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 
00:30:26.582 [2024-11-19 10:58:16.315361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.315393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.315598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.315630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.315956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.315989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.316267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.316300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.316577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.316609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 
00:30:26.582 [2024-11-19 10:58:16.316933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.316966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.317161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.317192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.317405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.317438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.317626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.317657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.317862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aaaf0 is same with the state(6) to be set 00:30:26.582 [2024-11-19 10:58:16.318337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.318418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 
00:30:26.582 [2024-11-19 10:58:16.318656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.318692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.318958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.318991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.319222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.319257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.319463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.319496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.319725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.319758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 
00:30:26.582 [2024-11-19 10:58:16.320010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.320042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.320357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.320391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.320615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.320648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.320779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.320811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.321073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.321107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 
00:30:26.582 [2024-11-19 10:58:16.321254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.321288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.582 [2024-11-19 10:58:16.321525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.582 [2024-11-19 10:58:16.321559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.582 qpair failed and we were unable to recover it. 00:30:26.583 [2024-11-19 10:58:16.321716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.583 [2024-11-19 10:58:16.321750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.583 qpair failed and we were unable to recover it. 00:30:26.583 [2024-11-19 10:58:16.322041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.583 [2024-11-19 10:58:16.322074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.583 qpair failed and we were unable to recover it. 00:30:26.583 [2024-11-19 10:58:16.322380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.583 [2024-11-19 10:58:16.322415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.583 qpair failed and we were unable to recover it. 
00:30:26.583 [2024-11-19 10:58:16.322609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.583 [2024-11-19 10:58:16.322642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.583 qpair failed and we were unable to recover it. 00:30:26.583 [2024-11-19 10:58:16.322836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.583 [2024-11-19 10:58:16.322868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.583 qpair failed and we were unable to recover it. 00:30:26.583 [2024-11-19 10:58:16.323119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.583 [2024-11-19 10:58:16.323151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.583 qpair failed and we were unable to recover it. 00:30:26.583 [2024-11-19 10:58:16.323367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.860 [2024-11-19 10:58:16.323403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.860 qpair failed and we were unable to recover it. 00:30:26.860 [2024-11-19 10:58:16.323562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.860 [2024-11-19 10:58:16.323594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.860 qpair failed and we were unable to recover it. 
00:30:26.860 [2024-11-19 10:58:16.323851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.323884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.324087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.324122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.324371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.324405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.324550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.324582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.324734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.324766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.324954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.324994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.325247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.325282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.325426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.325457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.325676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.325709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.325960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.325993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.326300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.326333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.326567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.326600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.326801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.326833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.327030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.327062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.327330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.327365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.327573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.327609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.327816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.327849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.328063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.328097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.328347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.328383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.328556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.328589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.328879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.328913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.329167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.329210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.329363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.860 [2024-11-19 10:58:16.329396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.860 qpair failed and we were unable to recover it.
00:30:26.860 [2024-11-19 10:58:16.329619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.329655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.329951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.329985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.330187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.330229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.330454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.330487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.330645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.330679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.330879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.330912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.331038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.331072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.331281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.331316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.331455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.331488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.331776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.331811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.332021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.332055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.332267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.332303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.332515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.332548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.332739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.332772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.333045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.333078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.333360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.333397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.333623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.333658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.333897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.333931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.334183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.334225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.334418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.334450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.334656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.334689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.334845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.334877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.335161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.335199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.335438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.335472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.335630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.335663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.335882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.335915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.336193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.336238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.336442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.336475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.336630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.336663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.336985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.337017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.337230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.337265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.337529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.337563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.337695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.337727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.337970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.338004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.338309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.338343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.338573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.338605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.338881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.338914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.339194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.339237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.339433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.339465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.339616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.339649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.339968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.340001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.340256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.340290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.340507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.340540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.340733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.340766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.341076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.341107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.341320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.341354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.341564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.341596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.341805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.341838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.861 [2024-11-19 10:58:16.341967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.861 [2024-11-19 10:58:16.342000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.861 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.342261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.342295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.342501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.342533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.342734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.342767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.343019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.343051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.343291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.343326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.343603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.343636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.343850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.343883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.344017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.344051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.344281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.344315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.344451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.344483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.344676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.344710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.344843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.344876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.345077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.345110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.345318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.345358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.345614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.345647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.345775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.345807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.346093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.346125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.346318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.346352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.346564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.346597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.346802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.346835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.347087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.347120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.347265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.347299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.347577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.347609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.347907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.347940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.348237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.348271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.348561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.348595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.348893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.348926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.349152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.349185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.349452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.349484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.349623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.349654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.349937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.349971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.350106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.350138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.350447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.350482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.350615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.350648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.350849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.350881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.351109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.351142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.351371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.862 [2024-11-19 10:58:16.351404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.862 qpair failed and we were unable to recover it.
00:30:26.862 [2024-11-19 10:58:16.351545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.863 [2024-11-19 10:58:16.351577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.863 qpair failed and we were unable to recover it.
00:30:26.863 [2024-11-19 10:58:16.351732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.863 [2024-11-19 10:58:16.351765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.863 qpair failed and we were unable to recover it.
00:30:26.863 [2024-11-19 10:58:16.351964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.863 [2024-11-19 10:58:16.351996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.863 qpair failed and we were unable to recover it.
00:30:26.863 [2024-11-19 10:58:16.352218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.863 [2024-11-19 10:58:16.352253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.863 qpair failed and we were unable to recover it.
00:30:26.863 [2024-11-19 10:58:16.352440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.863 [2024-11-19 10:58:16.352473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.863 qpair failed and we were unable to recover it.
00:30:26.863 [2024-11-19 10:58:16.352686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.863 [2024-11-19 10:58:16.352719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:26.863 qpair failed and we were unable to recover it.
00:30:26.863 [2024-11-19 10:58:16.353056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.353088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.353314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.353348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.353518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.353549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.353704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.353736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.354033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.354066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 
00:30:26.863 [2024-11-19 10:58:16.354259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.354294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.354443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.354474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.354610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.354643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.354912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.354945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.355140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.355173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 
00:30:26.863 [2024-11-19 10:58:16.355376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.355414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.355625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.355658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.355819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.355851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.356047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.356080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.356359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.356393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 
00:30:26.863 [2024-11-19 10:58:16.356552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.356584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.356725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.356758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.357058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.357090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.357393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.357427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.357637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.357668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 
00:30:26.863 [2024-11-19 10:58:16.357808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.357842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.358037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.358069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.358257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.358290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.358494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.358528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.863 [2024-11-19 10:58:16.358690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.358722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 
00:30:26.863 [2024-11-19 10:58:16.359052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.863 [2024-11-19 10:58:16.359084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.863 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.359302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.359336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.359490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.359522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.359720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.359753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.360066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.360098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 
00:30:26.864 [2024-11-19 10:58:16.360383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.360418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.360544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.360577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.360736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.360769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.361045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.361079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.361309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.361343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 
00:30:26.864 [2024-11-19 10:58:16.361596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.361628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.361853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.361886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.362232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.362314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.362580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.362617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.362903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.362937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 
00:30:26.864 [2024-11-19 10:58:16.363131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.363164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.363361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.363395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.363610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.363643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.363952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.363984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.364280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.364316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 
00:30:26.864 [2024-11-19 10:58:16.364476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.364509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.364810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.364841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.365062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.365095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.365347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.365381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.365570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.365601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 
00:30:26.864 [2024-11-19 10:58:16.365888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.365923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.366187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.366229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.366387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.366419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.366618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.366650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.366802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.366836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 
00:30:26.864 [2024-11-19 10:58:16.367062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.367094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.367372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.367405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.367611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.367644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.367923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.367954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.368219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.368253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 
00:30:26.864 [2024-11-19 10:58:16.368407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.368441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.368584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.368617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.368907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.368940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.864 [2024-11-19 10:58:16.369235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.864 [2024-11-19 10:58:16.369270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.864 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.369431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.369470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 
00:30:26.865 [2024-11-19 10:58:16.369626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.369663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.369936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.369967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.370271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.370306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.370539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.370571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.370775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.370807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 
00:30:26.865 [2024-11-19 10:58:16.371003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.371034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.371237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.371270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.371419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.371451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.371656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.371688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.371956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.371989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 
00:30:26.865 [2024-11-19 10:58:16.372188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.372231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.372427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.372458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.372619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.372652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.372925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.372958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.373225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.373260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 
00:30:26.865 [2024-11-19 10:58:16.373543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.373577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.373773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.373805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.374079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.374112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.374332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.374366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.374568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.374601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 
00:30:26.865 [2024-11-19 10:58:16.374805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.374838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.375123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.375155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.375458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.375492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.375675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.375707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.375928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.375961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 
00:30:26.865 [2024-11-19 10:58:16.376253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.376288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.376561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.376599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.376835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.376868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.377084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.377116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.377304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.377338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 
00:30:26.865 [2024-11-19 10:58:16.377612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.377644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.377944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.377976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.378195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.378239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.378381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.378414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.378694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.378726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 
00:30:26.865 [2024-11-19 10:58:16.379033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.379066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.379275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.865 [2024-11-19 10:58:16.379308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.865 qpair failed and we were unable to recover it. 00:30:26.865 [2024-11-19 10:58:16.379556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.379588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.379795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.379829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.380047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.380079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 
00:30:26.866 [2024-11-19 10:58:16.380348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.380383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.380662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.380695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.380940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.380973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.381251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.381286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.381489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.381522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 
00:30:26.866 [2024-11-19 10:58:16.381753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.381784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.381973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.382006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.382308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.382341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.382618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.382650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.382903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.382936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 
00:30:26.866 [2024-11-19 10:58:16.383237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.383271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.383541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.383572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.383789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.383822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.384096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.384135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.384426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.384459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 
00:30:26.866 [2024-11-19 10:58:16.384738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.384770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.384981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.385015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.385298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.385333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.385552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.385584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.385772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.385804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 
00:30:26.866 [2024-11-19 10:58:16.386081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.386114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.386406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.386439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.386643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.386674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.386858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.386890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.387166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.387199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 
00:30:26.866 [2024-11-19 10:58:16.387408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.387441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.387693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.387726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.388001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.388073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.388359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.388398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.388606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.388639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 
00:30:26.866 [2024-11-19 10:58:16.388920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.388951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.389213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.389246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.389472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.389505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.389740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.389771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 00:30:26.866 [2024-11-19 10:58:16.389909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.866 [2024-11-19 10:58:16.389941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.866 qpair failed and we were unable to recover it. 
00:30:26.866 [2024-11-19 10:58:16.390144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.390176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.390462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.390493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.390721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.390752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.391010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.391043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.391325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.391360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 
00:30:26.867 [2024-11-19 10:58:16.391613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.391662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.391955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.391987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.392188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.392229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.392378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.392410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.392612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.392644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 
00:30:26.867 [2024-11-19 10:58:16.392768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.392800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.393093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.393125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.393335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.393369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.393620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.393651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.393927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.393959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 
00:30:26.867 [2024-11-19 10:58:16.394154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.394185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.394467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.394500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.394801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.394833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.395031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.395063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.395267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.395301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 
00:30:26.867 [2024-11-19 10:58:16.395611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.395642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.395926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.395958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.396230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.396263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.396467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.396500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.396776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.396807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 
00:30:26.867 [2024-11-19 10:58:16.397015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.397047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.397296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.397329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.397526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.397558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.397760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.397793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.398067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.398099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 
00:30:26.867 [2024-11-19 10:58:16.398343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.398375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.398648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.398680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.867 [2024-11-19 10:58:16.398976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.867 [2024-11-19 10:58:16.399016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.867 qpair failed and we were unable to recover it. 00:30:26.868 [2024-11-19 10:58:16.399281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.868 [2024-11-19 10:58:16.399315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.868 qpair failed and we were unable to recover it. 00:30:26.868 [2024-11-19 10:58:16.399448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.868 [2024-11-19 10:58:16.399480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.868 qpair failed and we were unable to recover it. 
00:30:26.868 [2024-11-19 10:58:16.399680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.868 [2024-11-19 10:58:16.399712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.868 qpair failed and we were unable to recover it. 00:30:26.868 [2024-11-19 10:58:16.399910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.868 [2024-11-19 10:58:16.399942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.868 qpair failed and we were unable to recover it. 00:30:26.868 [2024-11-19 10:58:16.400143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.868 [2024-11-19 10:58:16.400176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.868 qpair failed and we were unable to recover it. 00:30:26.868 [2024-11-19 10:58:16.400423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.868 [2024-11-19 10:58:16.400459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.868 qpair failed and we were unable to recover it. 00:30:26.868 [2024-11-19 10:58:16.400673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.868 [2024-11-19 10:58:16.400706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:26.868 qpair failed and we were unable to recover it. 
00:30:26.868 [2024-11-19 10:58:16.400985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.401017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.401199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.401241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.401498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.401530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.401781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.401814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.402003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.402034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.402224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.402263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.402465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.402497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.402692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.402724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.402910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.402942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.403215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.403249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.403526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.403558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.403838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.403871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.404094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.404126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.404427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.404460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.404725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.404756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.405006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.405038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.405257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.405292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.405547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.405578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.405850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.405882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.406164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.406196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.406397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.406429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.406671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.406702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.406919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.406950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.407220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.407253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.407502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.407534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.407744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.407774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.407962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.407996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.408264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.408297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.408586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.868 [2024-11-19 10:58:16.408617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.868 qpair failed and we were unable to recover it.
00:30:26.868 [2024-11-19 10:58:16.408894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.408925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.409125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.409156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.409446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.409480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.409766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.409838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.410109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.410145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.410432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.410468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.410763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.410795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.410993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.411024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.411279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.411313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.411584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.411615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.411888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.411920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.412218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.412250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.412445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.412476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.412723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.412754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.413032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.413064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.413271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.413304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.413578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.413619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.413866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.413898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.414101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.414131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.414381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.414414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.414593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.414623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.414826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.414858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.415047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.415079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.415277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.415310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.415502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.415534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.415793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.415824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.416044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.416074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.416342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.416375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.416699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.416730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.416938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.416969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.417266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.417300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.417580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.417611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.417845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.417876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.418142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.418173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.418374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.418408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.418664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.418694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.418880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.418912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.419156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.869 [2024-11-19 10:58:16.419191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.869 qpair failed and we were unable to recover it.
00:30:26.869 [2024-11-19 10:58:16.419398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.419429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.419693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.419725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.420016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.420048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.420324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.420357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.420615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.420646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.420962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.421007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.421474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.421510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.421716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.421749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.421942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.421973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.422222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.422256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.422551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.422583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.422770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.422803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.422992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.423024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.423161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.423192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.423446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.423479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.423748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.423780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.424021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.424053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.424241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.424274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.424546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.424578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.424724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.424756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.424973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.425005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.425284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.425318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.425593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.425624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.425914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.425946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.426175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.426216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.426344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.426375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.426657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.426688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.426947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.870 [2024-11-19 10:58:16.426980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.870 qpair failed and we were unable to recover it.
00:30:26.870 [2024-11-19 10:58:16.427194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.870 [2024-11-19 10:58:16.427234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.870 qpair failed and we were unable to recover it. 00:30:26.870 [2024-11-19 10:58:16.427507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.870 [2024-11-19 10:58:16.427540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.870 qpair failed and we were unable to recover it. 00:30:26.870 [2024-11-19 10:58:16.427746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.870 [2024-11-19 10:58:16.427777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.870 qpair failed and we were unable to recover it. 00:30:26.870 [2024-11-19 10:58:16.427972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.870 [2024-11-19 10:58:16.428005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.870 qpair failed and we were unable to recover it. 00:30:26.870 [2024-11-19 10:58:16.428180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.870 [2024-11-19 10:58:16.428227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.870 qpair failed and we were unable to recover it. 
00:30:26.870 [2024-11-19 10:58:16.428423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.870 [2024-11-19 10:58:16.428455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.870 qpair failed and we were unable to recover it. 00:30:26.870 [2024-11-19 10:58:16.428646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.870 [2024-11-19 10:58:16.428678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.870 qpair failed and we were unable to recover it. 00:30:26.870 [2024-11-19 10:58:16.428948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.870 [2024-11-19 10:58:16.428980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.870 qpair failed and we were unable to recover it. 00:30:26.870 [2024-11-19 10:58:16.429272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.870 [2024-11-19 10:58:16.429305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.870 qpair failed and we were unable to recover it. 00:30:26.870 [2024-11-19 10:58:16.429553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.870 [2024-11-19 10:58:16.429585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.870 qpair failed and we were unable to recover it. 
00:30:26.870 [2024-11-19 10:58:16.429706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.429737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.429936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.429968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.430179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.430219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.430420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.430451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.430696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.430729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 
00:30:26.871 [2024-11-19 10:58:16.430997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.431028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.431320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.431354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.431622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.431654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.431931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.431964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.432251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.432286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 
00:30:26.871 [2024-11-19 10:58:16.432556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.432605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.432807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.432843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.433074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.433108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.433292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.433327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.433454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.433485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 
00:30:26.871 [2024-11-19 10:58:16.433749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.433781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.434029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.434060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.434250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.434282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.434550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.434581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.434759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.434790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 
00:30:26.871 [2024-11-19 10:58:16.435060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.435092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.435287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.435326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.435577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.435610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.435887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.435918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.436288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.436322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 
00:30:26.871 [2024-11-19 10:58:16.436569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.436601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.436873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.436905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.437120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.437151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.437437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.437471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.437696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.437727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 
00:30:26.871 [2024-11-19 10:58:16.437978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.438010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.438269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.438303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.438443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.438474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.438682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.438714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.439003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.439035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 
00:30:26.871 [2024-11-19 10:58:16.439316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.439349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.439616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.439648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.439945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.439977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.871 [2024-11-19 10:58:16.440249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.871 [2024-11-19 10:58:16.440282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.871 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.440551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.440582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 
00:30:26.872 [2024-11-19 10:58:16.440703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.440735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.441012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.441044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.441174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.441214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.441465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.441497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.441745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.441777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 
00:30:26.872 [2024-11-19 10:58:16.442045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.442077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.442351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.442384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.442660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.442693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.442985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.443025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.443222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.443255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 
00:30:26.872 [2024-11-19 10:58:16.443435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.443468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.443714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.443746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.443935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.443967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.444162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.444194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.444518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.444550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 
00:30:26.872 [2024-11-19 10:58:16.444829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.444860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.445105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.445137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.445252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.445285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.445429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.445461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.445728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.445760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 
00:30:26.872 [2024-11-19 10:58:16.446081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.446113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.446365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.446399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.446613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.446645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.446920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.446952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.447251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.447285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 
00:30:26.872 [2024-11-19 10:58:16.447480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.447513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.447700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.447732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.447993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.448024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.448238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.448271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.448545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.448576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 
00:30:26.872 [2024-11-19 10:58:16.448714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.448746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.449039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.449071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.872 [2024-11-19 10:58:16.449271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.872 [2024-11-19 10:58:16.449304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.872 qpair failed and we were unable to recover it. 00:30:26.873 [2024-11-19 10:58:16.449496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.873 [2024-11-19 10:58:16.449531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.873 qpair failed and we were unable to recover it. 00:30:26.873 [2024-11-19 10:58:16.449731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.873 [2024-11-19 10:58:16.449763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.873 qpair failed and we were unable to recover it. 
00:30:26.873 [2024-11-19 10:58:16.449963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.873 [2024-11-19 10:58:16.449996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:26.873 qpair failed and we were unable to recover it.
00:30:26.873 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats 114 more times, timestamps 10:58:16.450322 through 10:58:16.481582 ...]
00:30:26.876 [2024-11-19 10:58:16.481893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.481926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.482053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.482084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.482288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.482321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.482598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.482629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.482909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.482942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 
00:30:26.876 [2024-11-19 10:58:16.483152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.483184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.483339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.483372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.483591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.483623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.483897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.483930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.484223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.484256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 
00:30:26.876 [2024-11-19 10:58:16.484437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.484470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.484681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.484713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.484990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.485023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.485275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.485329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.485621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.485654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 
00:30:26.876 [2024-11-19 10:58:16.485881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.485912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.486167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.486200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.486392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.486424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.486677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.486709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.486995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.487027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 
00:30:26.876 [2024-11-19 10:58:16.487220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.487254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.487474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.487512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.487765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.487798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.488055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.488086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.488282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.488315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 
00:30:26.876 [2024-11-19 10:58:16.488536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.488568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.488766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.488798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.488993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.489025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.489279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.489313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.489567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.489599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 
00:30:26.876 [2024-11-19 10:58:16.489904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.489936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.490059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.490090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.490365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.490399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.490672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.876 [2024-11-19 10:58:16.490704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.876 qpair failed and we were unable to recover it. 00:30:26.876 [2024-11-19 10:58:16.490998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.491030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 
00:30:26.877 [2024-11-19 10:58:16.491317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.491352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.491567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.491598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.491750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.491783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.492037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.492070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.492342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.492376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 
00:30:26.877 [2024-11-19 10:58:16.492516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.492548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.492680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.492712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.492917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.492949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.493197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.493238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.493437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.493471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 
00:30:26.877 [2024-11-19 10:58:16.493660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.493692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.493944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.493976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.494176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.494216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.494421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.494459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.494649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.494680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 
00:30:26.877 [2024-11-19 10:58:16.494906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.494938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.495140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.495172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.495321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.495355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.495559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.495590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.495704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.495737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 
00:30:26.877 [2024-11-19 10:58:16.495991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.496023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.496231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.496265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.496393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.496425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.496553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.496586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.496862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.496894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 
00:30:26.877 [2024-11-19 10:58:16.497076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.497109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.497307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.497341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.497484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.497515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.497793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.497826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.497948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.497980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 
00:30:26.877 [2024-11-19 10:58:16.498255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.498289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.498482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.498513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.498667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.877 [2024-11-19 10:58:16.498700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.877 qpair failed and we were unable to recover it. 00:30:26.877 [2024-11-19 10:58:16.498918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.498949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 00:30:26.878 [2024-11-19 10:58:16.499144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.499177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 
00:30:26.878 [2024-11-19 10:58:16.499374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.499407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 00:30:26.878 [2024-11-19 10:58:16.499543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.499575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 00:30:26.878 [2024-11-19 10:58:16.499769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.499801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 00:30:26.878 [2024-11-19 10:58:16.500001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.500034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 00:30:26.878 [2024-11-19 10:58:16.500226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.500260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 
00:30:26.878 [2024-11-19 10:58:16.500458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.500492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 00:30:26.878 [2024-11-19 10:58:16.500756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.500789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 00:30:26.878 [2024-11-19 10:58:16.500991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.501024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 00:30:26.878 [2024-11-19 10:58:16.501178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.501221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 00:30:26.878 [2024-11-19 10:58:16.501403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.501435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 
00:30:26.878 [2024-11-19 10:58:16.501709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.501741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 00:30:26.878 [2024-11-19 10:58:16.501881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.501913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 00:30:26.878 [2024-11-19 10:58:16.502111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.502142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 00:30:26.878 [2024-11-19 10:58:16.502289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.502323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 00:30:26.878 [2024-11-19 10:58:16.502481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.502512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 
00:30:26.878 [2024-11-19 10:58:16.502710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.502742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 00:30:26.878 [2024-11-19 10:58:16.502871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.502903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 00:30:26.878 [2024-11-19 10:58:16.503089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.503122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 00:30:26.878 [2024-11-19 10:58:16.503325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.503358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 00:30:26.878 [2024-11-19 10:58:16.503604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.878 [2024-11-19 10:58:16.503681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.878 qpair failed and we were unable to recover it. 
00:30:26.881 [2024-11-19 10:58:16.528006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.528037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.528166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.528197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.528407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.528439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.528694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.528725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.528862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.528893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 
00:30:26.881 [2024-11-19 10:58:16.529094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.529125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.529279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.529312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.529420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.529452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.529708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.529739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.529864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.529896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 
00:30:26.881 [2024-11-19 10:58:16.530028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.530059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.530189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.530233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.530410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.530441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.530641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.530673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.530789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.530821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 
00:30:26.881 [2024-11-19 10:58:16.531090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.531121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.531323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.531356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.531657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.531688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.531895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.531927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.532126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.532158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 
00:30:26.881 [2024-11-19 10:58:16.532435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.532467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.532760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.532791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.881 [2024-11-19 10:58:16.532899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.881 [2024-11-19 10:58:16.532930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.881 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.533180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.533223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.533428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.533460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 
00:30:26.882 [2024-11-19 10:58:16.533657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.533689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.533866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.533897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.534076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.534106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.534298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.534332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.534465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.534495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 
00:30:26.882 [2024-11-19 10:58:16.534615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.534647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.534823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.534854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.535054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.535097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.535370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.535404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.535588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.535620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 
00:30:26.882 [2024-11-19 10:58:16.535868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.535899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.536086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.536118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.536386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.536419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.536622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.536654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.536919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.536951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 
00:30:26.882 [2024-11-19 10:58:16.537128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.537159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.537418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.537451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.537670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.537701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.537954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.537986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.538199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.538241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 
00:30:26.882 [2024-11-19 10:58:16.538444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.538475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.538671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.538703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.538885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.538917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.539198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.539240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.539443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.539475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 
00:30:26.882 [2024-11-19 10:58:16.539602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.539632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.539758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.539790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.539979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.540010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.540226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.540259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.540506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.540538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 
00:30:26.882 [2024-11-19 10:58:16.540738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.540769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.540911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.540943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.541131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.541163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.541368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.541401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 00:30:26.882 [2024-11-19 10:58:16.541534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.882 [2024-11-19 10:58:16.541572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.882 qpair failed and we were unable to recover it. 
00:30:26.882 [2024-11-19 10:58:16.541697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.541728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 00:30:26.883 [2024-11-19 10:58:16.541978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.542011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 00:30:26.883 [2024-11-19 10:58:16.542196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.542236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 00:30:26.883 [2024-11-19 10:58:16.542379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.542412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 00:30:26.883 [2024-11-19 10:58:16.542602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.542634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 
00:30:26.883 [2024-11-19 10:58:16.542768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.542799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 00:30:26.883 [2024-11-19 10:58:16.542993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.543024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 00:30:26.883 [2024-11-19 10:58:16.543242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.543274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 00:30:26.883 [2024-11-19 10:58:16.543493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.543525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 00:30:26.883 [2024-11-19 10:58:16.543799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.543831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 
00:30:26.883 [2024-11-19 10:58:16.544143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.544174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 00:30:26.883 [2024-11-19 10:58:16.544406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.544438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 00:30:26.883 [2024-11-19 10:58:16.544627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.544659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 00:30:26.883 [2024-11-19 10:58:16.544862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.544894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 00:30:26.883 [2024-11-19 10:58:16.545090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.545121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 
00:30:26.883 [2024-11-19 10:58:16.545363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.545397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 00:30:26.883 [2024-11-19 10:58:16.545589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.545619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 00:30:26.883 [2024-11-19 10:58:16.545727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.545776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 00:30:26.883 [2024-11-19 10:58:16.546051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.546083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 00:30:26.883 [2024-11-19 10:58:16.546274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.883 [2024-11-19 10:58:16.546307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.883 qpair failed and we were unable to recover it. 
00:30:26.886 [2024-11-19 10:58:16.571065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.886 [2024-11-19 10:58:16.571097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.886 qpair failed and we were unable to recover it. 00:30:26.886 [2024-11-19 10:58:16.571325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.886 [2024-11-19 10:58:16.571358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.886 qpair failed and we were unable to recover it. 00:30:26.886 [2024-11-19 10:58:16.571576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.886 [2024-11-19 10:58:16.571607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.886 qpair failed and we were unable to recover it. 00:30:26.886 [2024-11-19 10:58:16.571830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.886 [2024-11-19 10:58:16.571863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.886 qpair failed and we were unable to recover it. 00:30:26.886 [2024-11-19 10:58:16.572058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.886 [2024-11-19 10:58:16.572088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.886 qpair failed and we were unable to recover it. 
00:30:26.886 [2024-11-19 10:58:16.572276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.886 [2024-11-19 10:58:16.572309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.886 qpair failed and we were unable to recover it. 00:30:26.886 [2024-11-19 10:58:16.572526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.886 [2024-11-19 10:58:16.572557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.886 qpair failed and we were unable to recover it. 00:30:26.886 [2024-11-19 10:58:16.572767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.886 [2024-11-19 10:58:16.572799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.886 qpair failed and we were unable to recover it. 00:30:26.886 [2024-11-19 10:58:16.573066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.886 [2024-11-19 10:58:16.573097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.886 qpair failed and we were unable to recover it. 00:30:26.886 [2024-11-19 10:58:16.573372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.886 [2024-11-19 10:58:16.573405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.886 qpair failed and we were unable to recover it. 
00:30:26.886 [2024-11-19 10:58:16.573582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.886 [2024-11-19 10:58:16.573613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.886 qpair failed and we were unable to recover it. 00:30:26.886 [2024-11-19 10:58:16.573796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.886 [2024-11-19 10:58:16.573828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.886 qpair failed and we were unable to recover it. 00:30:26.886 [2024-11-19 10:58:16.574121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.886 [2024-11-19 10:58:16.574152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.886 qpair failed and we were unable to recover it. 00:30:26.886 [2024-11-19 10:58:16.574281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.886 [2024-11-19 10:58:16.574313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.886 qpair failed and we were unable to recover it. 00:30:26.886 [2024-11-19 10:58:16.574431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.886 [2024-11-19 10:58:16.574461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.886 qpair failed and we were unable to recover it. 
00:30:26.886 [2024-11-19 10:58:16.574705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.886 [2024-11-19 10:58:16.574736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.886 qpair failed and we were unable to recover it. 00:30:26.886 [2024-11-19 10:58:16.574952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.886 [2024-11-19 10:58:16.574984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.886 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.575176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.575236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.575420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.575452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.575692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.575724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 
00:30:26.887 [2024-11-19 10:58:16.575901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.575931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.576108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.576140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.576259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.576292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.576532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.576564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.576802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.576833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 
00:30:26.887 [2024-11-19 10:58:16.577080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.577111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.577306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.577340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.577481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.577512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.577773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.577805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.578046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.578083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 
00:30:26.887 [2024-11-19 10:58:16.578269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.578301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.578480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.578511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.578723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.578754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.578945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.578976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.579160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.579191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 
00:30:26.887 [2024-11-19 10:58:16.579418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.579449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.579700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.579732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.579913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.579943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.580187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.580229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.580409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.580441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 
00:30:26.887 [2024-11-19 10:58:16.580638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.580670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.580857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.580888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.581075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.581106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.581237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.581271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.581459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.581490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 
00:30:26.887 [2024-11-19 10:58:16.581628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.581659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.581852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.581883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.582123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.582154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.582362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.582393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.582583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.582614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 
00:30:26.887 [2024-11-19 10:58:16.582824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.582855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.583118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.583149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.583296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.583327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.583570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.583601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.887 qpair failed and we were unable to recover it. 00:30:26.887 [2024-11-19 10:58:16.583841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.887 [2024-11-19 10:58:16.583871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 
00:30:26.888 [2024-11-19 10:58:16.584043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.584075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 00:30:26.888 [2024-11-19 10:58:16.584228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.584262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 00:30:26.888 [2024-11-19 10:58:16.584402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.584435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 00:30:26.888 [2024-11-19 10:58:16.584727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.584758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 00:30:26.888 [2024-11-19 10:58:16.584878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.584910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 
00:30:26.888 [2024-11-19 10:58:16.585175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.585215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 00:30:26.888 [2024-11-19 10:58:16.585509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.585541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 00:30:26.888 [2024-11-19 10:58:16.585803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.585834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 00:30:26.888 [2024-11-19 10:58:16.586103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.586135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 00:30:26.888 [2024-11-19 10:58:16.586351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.586384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 
00:30:26.888 [2024-11-19 10:58:16.586580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.586611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 00:30:26.888 [2024-11-19 10:58:16.586882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.586914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 00:30:26.888 [2024-11-19 10:58:16.587128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.587158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 00:30:26.888 [2024-11-19 10:58:16.587453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.587485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 00:30:26.888 [2024-11-19 10:58:16.587601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.587638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 
00:30:26.888 [2024-11-19 10:58:16.587870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.587901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 00:30:26.888 [2024-11-19 10:58:16.588096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.588127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 00:30:26.888 [2024-11-19 10:58:16.588246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.588279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 00:30:26.888 [2024-11-19 10:58:16.588524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.588555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 00:30:26.888 [2024-11-19 10:58:16.588747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.888 [2024-11-19 10:58:16.588779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.888 qpair failed and we were unable to recover it. 
00:30:26.888 [2024-11-19 10:58:16.588970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.888 [2024-11-19 10:58:16.589001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.888 qpair failed and we were unable to recover it.
00:30:26.888 [2024-11-19 10:58:16.589196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.888 [2024-11-19 10:58:16.589233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.888 qpair failed and we were unable to recover it.
00:30:26.888 [2024-11-19 10:58:16.589439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.888 [2024-11-19 10:58:16.589470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.888 qpair failed and we were unable to recover it.
00:30:26.888 [2024-11-19 10:58:16.589664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.888 [2024-11-19 10:58:16.589696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.888 qpair failed and we were unable to recover it.
00:30:26.888 [2024-11-19 10:58:16.589906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.888 [2024-11-19 10:58:16.589936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.888 qpair failed and we were unable to recover it.
00:30:26.888 [2024-11-19 10:58:16.590128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.888 [2024-11-19 10:58:16.590160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.888 qpair failed and we were unable to recover it.
00:30:26.888 [2024-11-19 10:58:16.590363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.888 [2024-11-19 10:58:16.590395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.888 qpair failed and we were unable to recover it.
00:30:26.888 [2024-11-19 10:58:16.590600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.888 [2024-11-19 10:58:16.590633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.888 qpair failed and we were unable to recover it.
00:30:26.888 [2024-11-19 10:58:16.590772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.888 [2024-11-19 10:58:16.590804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.888 qpair failed and we were unable to recover it.
00:30:26.888 [2024-11-19 10:58:16.591019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.888 [2024-11-19 10:58:16.591050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.888 qpair failed and we were unable to recover it.
00:30:26.888 [2024-11-19 10:58:16.591164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.888 [2024-11-19 10:58:16.591195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.888 qpair failed and we were unable to recover it.
00:30:26.888 [2024-11-19 10:58:16.591403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.888 [2024-11-19 10:58:16.591435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.888 qpair failed and we were unable to recover it.
00:30:26.888 [2024-11-19 10:58:16.591633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.888 [2024-11-19 10:58:16.591665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.888 qpair failed and we were unable to recover it.
00:30:26.888 [2024-11-19 10:58:16.591841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.888 [2024-11-19 10:58:16.591873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.888 qpair failed and we were unable to recover it.
00:30:26.888 [2024-11-19 10:58:16.592077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.888 [2024-11-19 10:58:16.592107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.888 qpair failed and we were unable to recover it.
00:30:26.888 [2024-11-19 10:58:16.592301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.888 [2024-11-19 10:58:16.592334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.888 qpair failed and we were unable to recover it.
00:30:26.888 [2024-11-19 10:58:16.592469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.888 [2024-11-19 10:58:16.592500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.888 qpair failed and we were unable to recover it.
00:30:26.888 [2024-11-19 10:58:16.592674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.592705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.592919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.592950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.593058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.593089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.593281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.593313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.593514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.593545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.593801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.593833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.594032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.594062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.594168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.594199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.594393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.594424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.594635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.594667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.594869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.594901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.595145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.595177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.595363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.595393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.595582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.595613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.595873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.595904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.596036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.596067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.596316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.596350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.596599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.596636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.596768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.596799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.597064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.597095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.597306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.597339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.597548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.597579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.597842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.597874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.598086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.598117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.598302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.598334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.598623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.598653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.598757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.598788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.599075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.599106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.599306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.599339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.599509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.599540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.599661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.599693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.599960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.599991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.600218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.600251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.600393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.600423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.600614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.600646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.600893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.600925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.601113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.601144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.601368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.889 [2024-11-19 10:58:16.601400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.889 qpair failed and we were unable to recover it.
00:30:26.889 [2024-11-19 10:58:16.601573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.601605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.601819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.601848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.602043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.602074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.602357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.602389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.602651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.602681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.602867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.602898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.603094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.603134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.603313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.603346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.603597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.603628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.603800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.603831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.604076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.604106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.604243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.604275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.604410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.604442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.604640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.604672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.604909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.604940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.605184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.605226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.605402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.605433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.605542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.605574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.605696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.605726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.605917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.605954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.606149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.606180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.606390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.606421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.606555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.606586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.606707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.606738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.606939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.606969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.607249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.607282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.607464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.607495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.607667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.607699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.607870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.607900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.608169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.608209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.608381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.608411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.608601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.608633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.608737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.608769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.890 [2024-11-19 10:58:16.608962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.890 [2024-11-19 10:58:16.608993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.890 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.609170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.609208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.609380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.609412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.609651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.609681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.609811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.609843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.610053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.610084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.610295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.610331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.610459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.610490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.610614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.610646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.610886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.610916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.611106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.611138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.611311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.611343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.611533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.611565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.611755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.611788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.611911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.611942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.612126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.612157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.612380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.612413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.612585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.612615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.612813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.612845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.612986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.613017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.613155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.613186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.613379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.613410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.613584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.613616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.613798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.613829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.614001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.614033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.614301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.614333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.614552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.891 [2024-11-19 10:58:16.614588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:26.891 qpair failed and we were unable to recover it.
00:30:26.891 [2024-11-19 10:58:16.614714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.891 [2024-11-19 10:58:16.614746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.891 qpair failed and we were unable to recover it. 00:30:26.891 [2024-11-19 10:58:16.614912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.891 [2024-11-19 10:58:16.614943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.891 qpair failed and we were unable to recover it. 00:30:26.891 [2024-11-19 10:58:16.615142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.891 [2024-11-19 10:58:16.615174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.891 qpair failed and we were unable to recover it. 00:30:26.891 [2024-11-19 10:58:16.615358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.891 [2024-11-19 10:58:16.615430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.891 qpair failed and we were unable to recover it. 00:30:26.891 [2024-11-19 10:58:16.615651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.891 [2024-11-19 10:58:16.615687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.891 qpair failed and we were unable to recover it. 
00:30:26.891 [2024-11-19 10:58:16.615964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.891 [2024-11-19 10:58:16.615997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.891 qpair failed and we were unable to recover it. 00:30:26.891 [2024-11-19 10:58:16.616127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.891 [2024-11-19 10:58:16.616159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.891 qpair failed and we were unable to recover it. 00:30:26.891 [2024-11-19 10:58:16.616360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.891 [2024-11-19 10:58:16.616394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.891 qpair failed and we were unable to recover it. 00:30:26.891 [2024-11-19 10:58:16.616620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.891 [2024-11-19 10:58:16.616652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.891 qpair failed and we were unable to recover it. 00:30:26.891 [2024-11-19 10:58:16.616911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.891 [2024-11-19 10:58:16.616943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.891 qpair failed and we were unable to recover it. 
00:30:26.891 [2024-11-19 10:58:16.617066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.891 [2024-11-19 10:58:16.617096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.891 qpair failed and we were unable to recover it. 00:30:26.891 [2024-11-19 10:58:16.617277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.617309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.617483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.617515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.617697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.617730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.617918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.617951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 
00:30:26.892 [2024-11-19 10:58:16.618151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.618185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.618326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.618357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.618532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.618565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.618812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.618844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.619031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.619063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 
00:30:26.892 [2024-11-19 10:58:16.619182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.619227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.619417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.619449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.619738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.619770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.619957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.619989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.620168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.620200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 
00:30:26.892 [2024-11-19 10:58:16.620427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.620459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.620626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.620697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.620920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.620956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.621141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.621173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.621377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.621409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 
00:30:26.892 [2024-11-19 10:58:16.621649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.621679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.621940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.621971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.622235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.622268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.622484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.622516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.622698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.622730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 
00:30:26.892 [2024-11-19 10:58:16.622917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.622948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.623178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.623219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.623348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.623379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.623620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.623651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.623828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.623865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 
00:30:26.892 [2024-11-19 10:58:16.624027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.624058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.624232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.624266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.624465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.624495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.624674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.624706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.624807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.624839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 
00:30:26.892 [2024-11-19 10:58:16.624975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.625006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.625132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.625163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.625279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.625311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.892 [2024-11-19 10:58:16.625488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.892 [2024-11-19 10:58:16.625518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.892 qpair failed and we were unable to recover it. 00:30:26.893 [2024-11-19 10:58:16.625710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.625742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 
00:30:26.893 [2024-11-19 10:58:16.625920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.625951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 00:30:26.893 [2024-11-19 10:58:16.626136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.626168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 00:30:26.893 [2024-11-19 10:58:16.626319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.626352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 00:30:26.893 [2024-11-19 10:58:16.626494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.626527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 00:30:26.893 [2024-11-19 10:58:16.626650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.626681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 
00:30:26.893 [2024-11-19 10:58:16.629360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.629397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 00:30:26.893 [2024-11-19 10:58:16.629575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.629605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 00:30:26.893 [2024-11-19 10:58:16.629793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.629823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 00:30:26.893 [2024-11-19 10:58:16.630059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.630091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 00:30:26.893 [2024-11-19 10:58:16.630300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.630332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 
00:30:26.893 [2024-11-19 10:58:16.630517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.630548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 00:30:26.893 [2024-11-19 10:58:16.630673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.630704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 00:30:26.893 [2024-11-19 10:58:16.630834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.630866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 00:30:26.893 [2024-11-19 10:58:16.631056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.631087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 00:30:26.893 [2024-11-19 10:58:16.631213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.631244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 
00:30:26.893 [2024-11-19 10:58:16.631415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.631447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 00:30:26.893 [2024-11-19 10:58:16.631608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.893 [2024-11-19 10:58:16.631678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:26.893 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.631811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.631847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.632070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.632102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.632350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.632384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 
00:30:27.186 [2024-11-19 10:58:16.632642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.632674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.632866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.632899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.633028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.633060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.633255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.633289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.633527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.633558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 
00:30:27.186 [2024-11-19 10:58:16.633680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.633711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.633977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.634009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.634215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.634249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.634441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.634474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.634663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.634695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 
00:30:27.186 [2024-11-19 10:58:16.634886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.634919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.635163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.635195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.635432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.635464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.635597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.635629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.635870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.635903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 
00:30:27.186 [2024-11-19 10:58:16.636073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.636106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.636224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.636264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.636450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.636483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.636676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.636708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.636950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.636982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 
00:30:27.186 [2024-11-19 10:58:16.637113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.637145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.637275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.637308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.637504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.637536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.637708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.637747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.637932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.637963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 
00:30:27.186 [2024-11-19 10:58:16.638136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.638168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.638310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.638344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.638480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.638512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.638686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.638719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 00:30:27.186 [2024-11-19 10:58:16.638914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.638946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.186 qpair failed and we were unable to recover it. 
00:30:27.186 [2024-11-19 10:58:16.639115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.186 [2024-11-19 10:58:16.639147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.639338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.639372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.639608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.639640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.639843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.639874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.640055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.640087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 
00:30:27.187 [2024-11-19 10:58:16.640190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.640234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.640357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.640390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.640578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.640610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.640807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.640838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.641021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.641052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 
00:30:27.187 [2024-11-19 10:58:16.641235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.641268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.641442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.641472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.641658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.641689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.641804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.641836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.641955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.641987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 
00:30:27.187 [2024-11-19 10:58:16.642094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.642125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.642293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.642326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.642509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.642539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.642673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.642705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.642876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.642908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 
00:30:27.187 [2024-11-19 10:58:16.643100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.643137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.643251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.643284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.643471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.643503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.643674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.643705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.643810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.643842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 
00:30:27.187 [2024-11-19 10:58:16.644021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.644052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.644228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.644262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.644535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.644567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.644699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.644731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.645020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.645052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 
00:30:27.187 [2024-11-19 10:58:16.645235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.645269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.645410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.645442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.645629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.645662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.645839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.645871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.646073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.646105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 
00:30:27.187 [2024-11-19 10:58:16.646295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.646329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.646594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.646626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.646752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.646784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.187 [2024-11-19 10:58:16.646967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.187 [2024-11-19 10:58:16.646998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.187 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.647128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.647161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 
00:30:27.188 [2024-11-19 10:58:16.647450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.647484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.647691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.647723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.647909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.647941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.648066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.648099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.648235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.648268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 
00:30:27.188 [2024-11-19 10:58:16.648397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.648428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.648540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.648572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.648825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.648869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.649134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.649166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.649308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.649342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 
00:30:27.188 [2024-11-19 10:58:16.649552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.649584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.649730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.649761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.649947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.649979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.650178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.650222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.650420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.650452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 
00:30:27.188 [2024-11-19 10:58:16.650639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.650671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.650864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.650897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.651031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.651063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.651183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.651226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.651426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.651457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 
00:30:27.188 [2024-11-19 10:58:16.651619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.651652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.651921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.651952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.652189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.652232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.652457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.652489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.652698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.652730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 
00:30:27.188 [2024-11-19 10:58:16.652991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.653022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.653263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.653296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.653421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.653453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.653642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.653673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 00:30:27.188 [2024-11-19 10:58:16.653846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.653877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 
00:30:27.188 [2024-11-19 10:58:16.653999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.188 [2024-11-19 10:58:16.654030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.188 qpair failed and we were unable to recover it. 
[... the same three-line sequence — posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats with new timestamps over 100 more times between 10:58:16.654 and 10:58:16.676 ...]
00:30:27.192 [2024-11-19 10:58:16.676865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.676937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.677078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.677114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 
00:30:27.192 [2024-11-19 10:58:16.677249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.677284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.677419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.677451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.677702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.677734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.677995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.678027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.678214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.678248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 
00:30:27.192 [2024-11-19 10:58:16.678460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.678492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.678684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.678716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.678926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.678959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.679092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.679124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.679314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.679347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 
00:30:27.192 [2024-11-19 10:58:16.679556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.679588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.679778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.679819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.680064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.680096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.680338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.680372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.680559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.680590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 
00:30:27.192 [2024-11-19 10:58:16.680709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.680741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.680982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.681013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.681245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.681279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.681547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.681578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.681710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.681742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 
00:30:27.192 [2024-11-19 10:58:16.681925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.681956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.682068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.682100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.682225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.682258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.682440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.682471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.682664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.682696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 
00:30:27.192 [2024-11-19 10:58:16.682834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.682866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.683002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.683033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.683272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.683306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.683483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.683514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.683629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.683661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 
00:30:27.192 [2024-11-19 10:58:16.683900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.683939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.684052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.684084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.684223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.684265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.192 [2024-11-19 10:58:16.684446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.192 [2024-11-19 10:58:16.684492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.192 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.684639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.684684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 
00:30:27.193 [2024-11-19 10:58:16.684830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.684872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.685062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.685094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.685290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.685324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.685567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.685638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.685785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.685822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 
00:30:27.193 [2024-11-19 10:58:16.686086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.686119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.686337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.686372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.686612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.686644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.686906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.686938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.687107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.687138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 
00:30:27.193 [2024-11-19 10:58:16.687336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.687368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.687488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.687521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.687694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.687726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.687915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.687947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.688070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.688103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 
00:30:27.193 [2024-11-19 10:58:16.688277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.688311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.688436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.688477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.688722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.688754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.688944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.688977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.689150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.689181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 
00:30:27.193 [2024-11-19 10:58:16.689387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.689419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.689607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.689639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.689817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.689848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.689964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.689996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.690185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.690227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 
00:30:27.193 [2024-11-19 10:58:16.690467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.690499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.690763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.690794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.690924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.690956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.691151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.691182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.691406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.691439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 
00:30:27.193 [2024-11-19 10:58:16.691652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.691685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.691803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.691835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.692076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.692107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.692240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.692273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.692459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.692491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 
00:30:27.193 [2024-11-19 10:58:16.692607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.193 [2024-11-19 10:58:16.692639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.193 qpair failed and we were unable to recover it. 00:30:27.193 [2024-11-19 10:58:16.692763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-11-19 10:58:16.692795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-11-19 10:58:16.693045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-11-19 10:58:16.693077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-11-19 10:58:16.693312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-11-19 10:58:16.693359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 00:30:27.194 [2024-11-19 10:58:16.693625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.194 [2024-11-19 10:58:16.693657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.194 qpair failed and we were unable to recover it. 
00:30:27.194 [2024-11-19 10:58:16.693839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.194 [2024-11-19 10:58:16.693870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.194 qpair failed and we were unable to recover it.
[The three-line record above — posix_sock_create reporting connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reporting a sock connection error for addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously in this log from 10:58:16.693839 through 10:58:16.717961 (roughly 115 occurrences): the first three for tqpair=0x7f6b40000b90, all subsequent ones for tqpair=0x7f6b34000b90. The repeated records were concatenated onto long lines in this extract and are elided here; only the timestamps differ between occurrences.]
00:30:27.197 [2024-11-19 10:58:16.718261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.718293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.718420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.718451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.718689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.718721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.718903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.718935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.719131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.719161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 
00:30:27.197 [2024-11-19 10:58:16.719296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.719329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.719504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.719535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.719723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.719754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.719949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.719986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.720108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.720139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 
00:30:27.197 [2024-11-19 10:58:16.720335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.720367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.720507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.720538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.720676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.720707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.720878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.720909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.721012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.721044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 
00:30:27.197 [2024-11-19 10:58:16.721157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.721188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.721380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.721411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.721597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.721629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.721865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.721896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.722008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.722039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 
00:30:27.197 [2024-11-19 10:58:16.722242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.722275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.722450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.197 [2024-11-19 10:58:16.722481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.197 qpair failed and we were unable to recover it. 00:30:27.197 [2024-11-19 10:58:16.722673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.722704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.722878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.722910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.723109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.723140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 
00:30:27.198 [2024-11-19 10:58:16.723313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.723345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.723523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.723554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.723673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.723706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.723970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.724000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.724106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.724137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 
00:30:27.198 [2024-11-19 10:58:16.724339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.724373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.724612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.724643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.724826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.724857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.725025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.725057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.725189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.725229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 
00:30:27.198 [2024-11-19 10:58:16.725361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.725393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.725673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.725705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.725939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.725970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.726096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.726127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.726310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.726343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 
00:30:27.198 [2024-11-19 10:58:16.726463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.726494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.726678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.726709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.726903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.726933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.727061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.727092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.727330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.727362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 
00:30:27.198 [2024-11-19 10:58:16.727606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.727637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.727848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.727879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.728055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.728085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.728334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.728371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.728586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.728617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 
00:30:27.198 [2024-11-19 10:58:16.728788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.728820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.728994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.729024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.729140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.729172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.198 [2024-11-19 10:58:16.729365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.198 [2024-11-19 10:58:16.729398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.198 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.729533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.729564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 
00:30:27.199 [2024-11-19 10:58:16.729683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.729714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.729835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.729866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.730057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.730089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.730263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.730296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.730560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.730591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 
00:30:27.199 [2024-11-19 10:58:16.730782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.730814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.730987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.731018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.731218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.731252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.731436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.731466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.731654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.731685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 
00:30:27.199 [2024-11-19 10:58:16.731901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.731932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.732103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.732135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.732319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.732351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.732543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.732574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.732707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.732737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 
00:30:27.199 [2024-11-19 10:58:16.732865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.732897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.733103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.733134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.733339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.733371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.733547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.733576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.733680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.733711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 
00:30:27.199 [2024-11-19 10:58:16.733889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.733920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.734109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.734140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.734326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.734359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.734541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.734573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 00:30:27.199 [2024-11-19 10:58:16.734754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.199 [2024-11-19 10:58:16.734785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.199 qpair failed and we were unable to recover it. 
00:30:27.202 [2024-11-19 10:58:16.760395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-11-19 10:58:16.760428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-11-19 10:58:16.760597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-11-19 10:58:16.760628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-11-19 10:58:16.760801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-11-19 10:58:16.760832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-11-19 10:58:16.760955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-11-19 10:58:16.760987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.202 qpair failed and we were unable to recover it. 00:30:27.202 [2024-11-19 10:58:16.761117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.202 [2024-11-19 10:58:16.761148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 
00:30:27.203 [2024-11-19 10:58:16.761330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.761363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.761488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.761519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.761647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.761678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.761874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.761906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.762115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.762146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 
00:30:27.203 [2024-11-19 10:58:16.762336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.762368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.762549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.762579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.762757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.762789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.762979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.763011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.763190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.763233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 
00:30:27.203 [2024-11-19 10:58:16.763444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.763475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.763579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.763611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.763896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.763926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.764104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.764141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.764414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.764446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 
00:30:27.203 [2024-11-19 10:58:16.764691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.764723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.764914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.764945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.765056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.765088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.765278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.765311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.765449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.765479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 
00:30:27.203 [2024-11-19 10:58:16.765761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.765792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.765979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.766010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.766275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.766308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.766430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.766461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.766576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.766608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 
00:30:27.203 [2024-11-19 10:58:16.766849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.766880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.767136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.767168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.767313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.767346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.767603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.767635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.767883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.767914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 
00:30:27.203 [2024-11-19 10:58:16.768172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.768212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.768477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.768508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.768695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.768727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.768846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.768876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.769144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.769175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 
00:30:27.203 [2024-11-19 10:58:16.769376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.769407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.769575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.203 [2024-11-19 10:58:16.769607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.203 qpair failed and we were unable to recover it. 00:30:27.203 [2024-11-19 10:58:16.769805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.769838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.770021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.770053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.770248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.770281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 
00:30:27.204 [2024-11-19 10:58:16.770528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.770561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.770683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.770715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.770894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.770925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.771096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.771127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.771393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.771426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 
00:30:27.204 [2024-11-19 10:58:16.771623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.771654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.771780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.771812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.772002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.772036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.772220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.772252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.772517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.772549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 
00:30:27.204 [2024-11-19 10:58:16.772808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.772840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.772950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.772980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.773239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.773271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.773479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.773516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.773708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.773739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 
00:30:27.204 [2024-11-19 10:58:16.773925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.773956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.774192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.774247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.774425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.774457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.774629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.774660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.774838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.774869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 
00:30:27.204 [2024-11-19 10:58:16.775057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.775089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.775257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.775289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.775484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.775515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.775710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.775741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.775921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.775951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 
00:30:27.204 [2024-11-19 10:58:16.776164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.776195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.776385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.776415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.776524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.776555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.776817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.776848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 00:30:27.204 [2024-11-19 10:58:16.777030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.204 [2024-11-19 10:58:16.777062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.204 qpair failed and we were unable to recover it. 
00:30:27.204 [2024-11-19 10:58:16.777300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.204 [2024-11-19 10:58:16.777333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.204 qpair failed and we were unable to recover it.
00:30:27.207 [2024-11-19 10:58:16.796302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.207 [2024-11-19 10:58:16.796374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.207 qpair failed and we were unable to recover it.
00:30:27.207 [2024-11-19 10:58:16.796547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.207 [2024-11-19 10:58:16.796614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.207 qpair failed and we were unable to recover it.
00:30:27.208 [2024-11-19 10:58:16.801542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.801574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 00:30:27.208 [2024-11-19 10:58:16.801812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.801844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 00:30:27.208 [2024-11-19 10:58:16.801972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.802004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 00:30:27.208 [2024-11-19 10:58:16.802186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.802228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 00:30:27.208 [2024-11-19 10:58:16.802419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.802451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 
00:30:27.208 [2024-11-19 10:58:16.802742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.802773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 00:30:27.208 [2024-11-19 10:58:16.802888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.802920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 00:30:27.208 [2024-11-19 10:58:16.803101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.803133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 00:30:27.208 [2024-11-19 10:58:16.803313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.803346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 00:30:27.208 [2024-11-19 10:58:16.803514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.803546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 
00:30:27.208 [2024-11-19 10:58:16.803784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.803816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 00:30:27.208 [2024-11-19 10:58:16.803947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.803978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 00:30:27.208 [2024-11-19 10:58:16.804155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.804187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 00:30:27.208 [2024-11-19 10:58:16.804377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.804410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 00:30:27.208 [2024-11-19 10:58:16.804654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.804686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 
00:30:27.208 [2024-11-19 10:58:16.804874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.804906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 00:30:27.208 [2024-11-19 10:58:16.805085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.805117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 00:30:27.208 [2024-11-19 10:58:16.805251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.805284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 00:30:27.208 [2024-11-19 10:58:16.805400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.805432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 00:30:27.208 [2024-11-19 10:58:16.805621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.805653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 
00:30:27.208 [2024-11-19 10:58:16.805786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.208 [2024-11-19 10:58:16.805818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.208 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.806010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.806042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.806152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.806185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.806433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.806464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.806741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.806772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 
00:30:27.209 [2024-11-19 10:58:16.806905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.806937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.807124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.807155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.807382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.807415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.807660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.807693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.807891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.807924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 
00:30:27.209 [2024-11-19 10:58:16.808125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.808157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.808423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.808455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.808624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.808656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.808839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.808870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.809129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.809161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 
00:30:27.209 [2024-11-19 10:58:16.809329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.809363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.809489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.809521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.809640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.809672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.809852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.809884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.810011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.810042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 
00:30:27.209 [2024-11-19 10:58:16.810284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.810318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.810504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.810536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.810725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.810757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.810925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.810957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.811249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.811283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 
00:30:27.209 [2024-11-19 10:58:16.811402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.811433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.811674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.811705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.811826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.811858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.812040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.812071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.812261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.812295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 
00:30:27.209 [2024-11-19 10:58:16.812419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.812452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.812631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.812663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.812766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.812797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.813059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.813091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.813265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.813299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 
00:30:27.209 [2024-11-19 10:58:16.813425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.813464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.209 qpair failed and we were unable to recover it. 00:30:27.209 [2024-11-19 10:58:16.813664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.209 [2024-11-19 10:58:16.813697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.813898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.813929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.814061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.814093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.814279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.814313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 
00:30:27.210 [2024-11-19 10:58:16.814488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.814520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.814692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.814723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.814856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.814888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.815086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.815118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.815301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.815334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 
00:30:27.210 [2024-11-19 10:58:16.815520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.815553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.815662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.815694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.815870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.815902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.816079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.816110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.816287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.816320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 
00:30:27.210 [2024-11-19 10:58:16.816582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.816614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.816822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.816854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.816991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.817023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.817150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.817181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.817364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.817396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 
00:30:27.210 [2024-11-19 10:58:16.817582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.817614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.817737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.817768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.817953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.817985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.818238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.818272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.818456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.818488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 
00:30:27.210 [2024-11-19 10:58:16.818659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.818691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.818867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.818899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.819094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.819126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.819332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.819365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.819469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.819501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 
00:30:27.210 [2024-11-19 10:58:16.819612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.819644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.819897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.819929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.210 qpair failed and we were unable to recover it. 00:30:27.210 [2024-11-19 10:58:16.820095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.210 [2024-11-19 10:58:16.820127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.820301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.820335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.820523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.820556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 
00:30:27.211 [2024-11-19 10:58:16.820728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.820760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.820889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.820920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.821040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.821072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.821277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.821310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.821487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.821519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 
00:30:27.211 [2024-11-19 10:58:16.821763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.821801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.821986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.822019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.822245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.822279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.822451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.822483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.822615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.822647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 
00:30:27.211 [2024-11-19 10:58:16.822887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.822919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.823156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.823188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.823322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.823354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.823595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.823627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.823804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.823836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 
00:30:27.211 [2024-11-19 10:58:16.824104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.824137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.824397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.824431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.824635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.824667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.824863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.824896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.825015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.825049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 
00:30:27.211 [2024-11-19 10:58:16.825255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.825289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.825461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.825493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.825669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.825700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.826006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.826039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.826158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.826190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 
00:30:27.211 [2024-11-19 10:58:16.826388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.826420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.826606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.826639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.826821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.826853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.827057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.827088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.827274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.827308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 
00:30:27.211 [2024-11-19 10:58:16.827507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.827538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.827789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.827821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.828016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.211 [2024-11-19 10:58:16.828048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.211 qpair failed and we were unable to recover it. 00:30:27.211 [2024-11-19 10:58:16.828236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.828269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.828507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.828539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 
00:30:27.212 [2024-11-19 10:58:16.828670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.828703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.828806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.828836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.828964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.828994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.829119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.829150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.829333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.829364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 
00:30:27.212 [2024-11-19 10:58:16.829489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.829520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.829712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.829744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.829933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.829965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.830149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.830181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.830299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.830332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 
00:30:27.212 [2024-11-19 10:58:16.830444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.830481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.830593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.830625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.830795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.830827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.830951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.830983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.831158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.831189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 
00:30:27.212 [2024-11-19 10:58:16.831385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.831417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.831607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.831639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.831755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.831787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.831979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.832010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.832193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.832256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 
00:30:27.212 [2024-11-19 10:58:16.832433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.832465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.832592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.832624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.832744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.832777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.832998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.833030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.833253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.833287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 
00:30:27.212 [2024-11-19 10:58:16.833543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.833575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.833765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.833797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.834011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.834043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.834282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.834315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.834513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.834546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 
00:30:27.212 [2024-11-19 10:58:16.834680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.834711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.834844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.834876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.835005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.835037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.835158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.212 [2024-11-19 10:58:16.835189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.212 qpair failed and we were unable to recover it. 00:30:27.212 [2024-11-19 10:58:16.835368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.213 [2024-11-19 10:58:16.835400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.213 qpair failed and we were unable to recover it. 
00:30:27.213 [2024-11-19 10:58:16.835644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.213 [2024-11-19 10:58:16.835676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.213 qpair failed and we were unable to recover it. 00:30:27.213 [2024-11-19 10:58:16.835813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.213 [2024-11-19 10:58:16.835845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.213 qpair failed and we were unable to recover it. 00:30:27.213 [2024-11-19 10:58:16.836023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.213 [2024-11-19 10:58:16.836056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.213 qpair failed and we were unable to recover it. 00:30:27.213 [2024-11-19 10:58:16.836255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.213 [2024-11-19 10:58:16.836288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.213 qpair failed and we were unable to recover it. 00:30:27.213 [2024-11-19 10:58:16.836480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.213 [2024-11-19 10:58:16.836512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.213 qpair failed and we were unable to recover it. 
00:30:27.213 [2024-11-19 10:58:16.836633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.213 [2024-11-19 10:58:16.836665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.213 qpair failed and we were unable to recover it. 00:30:27.213 [2024-11-19 10:58:16.836851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.213 [2024-11-19 10:58:16.836883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.213 qpair failed and we were unable to recover it. 00:30:27.213 [2024-11-19 10:58:16.837061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.213 [2024-11-19 10:58:16.837093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.213 qpair failed and we were unable to recover it. 00:30:27.213 [2024-11-19 10:58:16.837264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.213 [2024-11-19 10:58:16.837298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.213 qpair failed and we were unable to recover it. 00:30:27.213 [2024-11-19 10:58:16.837470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.213 [2024-11-19 10:58:16.837502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.213 qpair failed and we were unable to recover it. 
00:30:27.213 [2024-11-19 10:58:16.837608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.213 [2024-11-19 10:58:16.837640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.213 qpair failed and we were unable to recover it. 00:30:27.213 [2024-11-19 10:58:16.837772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.213 [2024-11-19 10:58:16.837803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.213 qpair failed and we were unable to recover it. 00:30:27.213 [2024-11-19 10:58:16.837990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.213 [2024-11-19 10:58:16.838023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.213 qpair failed and we were unable to recover it. 00:30:27.213 [2024-11-19 10:58:16.838220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.213 [2024-11-19 10:58:16.838254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.213 qpair failed and we were unable to recover it. 00:30:27.213 [2024-11-19 10:58:16.838367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.213 [2024-11-19 10:58:16.838398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.213 qpair failed and we were unable to recover it. 
00:30:27.216 [2024-11-19 10:58:16.862140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-11-19 10:58:16.862173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-11-19 10:58:16.862468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-11-19 10:58:16.862538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-11-19 10:58:16.862736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-11-19 10:58:16.862806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-11-19 10:58:16.863006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-11-19 10:58:16.863042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-11-19 10:58:16.863169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-11-19 10:58:16.863217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 
00:30:27.216 [2024-11-19 10:58:16.863460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-11-19 10:58:16.863492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-11-19 10:58:16.863698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-11-19 10:58:16.863731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-11-19 10:58:16.863857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-11-19 10:58:16.863889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-11-19 10:58:16.864130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-11-19 10:58:16.864161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-11-19 10:58:16.864359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-11-19 10:58:16.864392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 
00:30:27.216 [2024-11-19 10:58:16.864584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-11-19 10:58:16.864616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-11-19 10:58:16.864733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-11-19 10:58:16.864765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-11-19 10:58:16.864960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-11-19 10:58:16.864992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-11-19 10:58:16.865260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.216 [2024-11-19 10:58:16.865293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.216 qpair failed and we were unable to recover it. 00:30:27.216 [2024-11-19 10:58:16.865400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.865431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 
00:30:27.217 [2024-11-19 10:58:16.865684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.865726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.865896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.865928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.866111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.866143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.866392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.866425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.866535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.866566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 
00:30:27.217 [2024-11-19 10:58:16.866768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.866799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.866989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.867020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.867225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.867258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.867441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.867473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.867659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.867692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 
00:30:27.217 [2024-11-19 10:58:16.867806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.867838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.868051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.868084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.868265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.868298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.868491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.868523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.868647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.868679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 
00:30:27.217 [2024-11-19 10:58:16.868948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.868980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.869092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.869123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.869368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.869401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.869595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.869627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.869741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.869773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 
00:30:27.217 [2024-11-19 10:58:16.870036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.870068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.870324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.870357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.870573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.870605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.870793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.870825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.870969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.871001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 
00:30:27.217 [2024-11-19 10:58:16.871183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.871223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.871505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.871537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.871844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.871914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.872173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.872217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.872422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.872455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 
00:30:27.217 [2024-11-19 10:58:16.872710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.872742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.872869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.872901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.873188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.873244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.873446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.873478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.873719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.873750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 
00:30:27.217 [2024-11-19 10:58:16.873870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.873902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.874096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.217 [2024-11-19 10:58:16.874127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.217 qpair failed and we were unable to recover it. 00:30:27.217 [2024-11-19 10:58:16.874391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.874425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.874611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.874642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.874854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.874885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 
00:30:27.218 [2024-11-19 10:58:16.875087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.875124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.875417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.875451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.875657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.875688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.875865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.875898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.876037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.876068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 
00:30:27.218 [2024-11-19 10:58:16.876253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.876286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.876400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.876431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.876548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.876580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.876765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.876796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.877007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.877040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 
00:30:27.218 [2024-11-19 10:58:16.877237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.877270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.877507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.877539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.877710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.877742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.878001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.878033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.878222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.878255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 
00:30:27.218 [2024-11-19 10:58:16.878546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.878578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.878855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.878887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.879147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.879179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.879476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.879509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.879708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.879739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 
00:30:27.218 [2024-11-19 10:58:16.879913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.879945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.880158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.880189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.880375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.880407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.880590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.880622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.880860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.880892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 
00:30:27.218 [2024-11-19 10:58:16.881022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.881052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.881237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.881270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.881403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.881436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.881698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.881729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 00:30:27.218 [2024-11-19 10:58:16.881920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.218 [2024-11-19 10:58:16.881951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.218 qpair failed and we were unable to recover it. 
00:30:27.219 [2024-11-19 10:58:16.889763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.219 [2024-11-19 10:58:16.889794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.219 qpair failed and we were unable to recover it.
00:30:27.220 [2024-11-19 10:58:16.889902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.220 [2024-11-19 10:58:16.889934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.220 qpair failed and we were unable to recover it.
00:30:27.220 [2024-11-19 10:58:16.890148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.220 [2024-11-19 10:58:16.890190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.220 qpair failed and we were unable to recover it.
00:30:27.220 [2024-11-19 10:58:16.890335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.220 [2024-11-19 10:58:16.890368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.220 qpair failed and we were unable to recover it.
00:30:27.220 [2024-11-19 10:58:16.890468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.220 [2024-11-19 10:58:16.890500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.220 qpair failed and we were unable to recover it.
00:30:27.221 [2024-11-19 10:58:16.905947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.221 [2024-11-19 10:58:16.905978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.221 qpair failed and we were unable to recover it. 00:30:27.221 [2024-11-19 10:58:16.906163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.906194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.906489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.906521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.906690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.906721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.906833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.906865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 
00:30:27.222 [2024-11-19 10:58:16.907058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.907090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.907330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.907362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.907537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.907569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.907703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.907735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.907932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.907962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 
00:30:27.222 [2024-11-19 10:58:16.908151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.908183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.908328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.908359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.908543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.908574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.908813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.908844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.909082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.909114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 
00:30:27.222 [2024-11-19 10:58:16.909284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.909317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.909420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.909451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.909694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.909726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.909872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.909903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.910094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.910125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 
00:30:27.222 [2024-11-19 10:58:16.910309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.910347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.910541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.910572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.910755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.910787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.911077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.911108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.911230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.911262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 
00:30:27.222 [2024-11-19 10:58:16.911525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.911557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.911737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.911767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.911957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.911989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.912182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.912221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.912450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.912482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 
00:30:27.222 [2024-11-19 10:58:16.912676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.912707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.912896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.912927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.913143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.913174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.913453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.913485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.913755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.913787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 
00:30:27.222 [2024-11-19 10:58:16.913971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.914003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.914186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.914228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.914431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.222 [2024-11-19 10:58:16.914463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.222 qpair failed and we were unable to recover it. 00:30:27.222 [2024-11-19 10:58:16.914636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.914667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.914874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.914907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 
00:30:27.223 [2024-11-19 10:58:16.915044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.915074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.915288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.915321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.915559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.915591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.915810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.915841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.916114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.916145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 
00:30:27.223 [2024-11-19 10:58:16.916446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.916478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.916725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.916756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.916972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.917009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.917147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.917178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.917305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.917337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 
00:30:27.223 [2024-11-19 10:58:16.917459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.917491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.917676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.917707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.917894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.917925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.918115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.918147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.918331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.918367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 
00:30:27.223 [2024-11-19 10:58:16.918553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.918585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.918705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.918736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.918868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.918900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.919081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.919112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.919374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.919408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 
00:30:27.223 [2024-11-19 10:58:16.919610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.919640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.919775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.919807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.919994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.920025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.920156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.920188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.920385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.920417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 
00:30:27.223 [2024-11-19 10:58:16.920607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.920638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.920823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.920855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.921062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.921094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.921288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.921321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.921536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.921568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 
00:30:27.223 [2024-11-19 10:58:16.921785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.921817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.922017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.922048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.922299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.922332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.922518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.922549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.922754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.922786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 
00:30:27.223 [2024-11-19 10:58:16.923050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.223 [2024-11-19 10:58:16.923080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.223 qpair failed and we were unable to recover it. 00:30:27.223 [2024-11-19 10:58:16.923258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.923291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.923428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.923459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.923651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.923682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.923871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.923902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 
00:30:27.224 [2024-11-19 10:58:16.924148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.924179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.924353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.924384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.924576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.924608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.924848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.924879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.925056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.925087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 
00:30:27.224 [2024-11-19 10:58:16.925264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.925297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.925552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.925584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.925774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.925810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.926009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.926040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.926248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.926281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 
00:30:27.224 [2024-11-19 10:58:16.926529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.926560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.926811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.926842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.927098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.927130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.927300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.927333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.927546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.927577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 
00:30:27.224 [2024-11-19 10:58:16.927767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.927798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.927905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.927935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.928125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.928157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.928438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.928471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.928708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.928740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 
00:30:27.224 [2024-11-19 10:58:16.928939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.928970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.929146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.929178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.929362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.929393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.929633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.929665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.929797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.929828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 
00:30:27.224 [2024-11-19 10:58:16.930003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.224 [2024-11-19 10:58:16.930034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.224 qpair failed and we were unable to recover it. 00:30:27.224 [2024-11-19 10:58:16.930234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.930267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.930458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.930489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.930673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.930704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.930834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.930865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 
00:30:27.225 [2024-11-19 10:58:16.930998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.931029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.931276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.931309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.931573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.931604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.931741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.931772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.931898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.931929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 
00:30:27.225 [2024-11-19 10:58:16.932123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.932154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.932291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.932323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.932431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.932462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.932730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.932761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.932937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.932968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 
00:30:27.225 [2024-11-19 10:58:16.933161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.933191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.933392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.933425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.933564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.933594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.933717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.933748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.933930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.933961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 
00:30:27.225 [2024-11-19 10:58:16.934225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.934258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.934541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.934572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.934771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.934809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.935074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.935104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.935296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.935328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 
00:30:27.225 [2024-11-19 10:58:16.935570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.935601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.935794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.935825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.935951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.935982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.936169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.936209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.936416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.936447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 
00:30:27.225 [2024-11-19 10:58:16.936631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.936663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.936855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.936886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.937065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.937096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.937272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.937305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.937502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.937533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 
00:30:27.225 [2024-11-19 10:58:16.937647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.937679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.937951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.937983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.938169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.938200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.225 [2024-11-19 10:58:16.938398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.225 [2024-11-19 10:58:16.938429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.225 qpair failed and we were unable to recover it. 00:30:27.226 [2024-11-19 10:58:16.938666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.226 [2024-11-19 10:58:16.938698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.226 qpair failed and we were unable to recover it. 
00:30:27.226 [2024-11-19 10:58:16.938872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.226 [2024-11-19 10:58:16.938902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.226 qpair failed and we were unable to recover it. 00:30:27.226 [2024-11-19 10:58:16.939148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.226 [2024-11-19 10:58:16.939180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.226 qpair failed and we were unable to recover it. 00:30:27.226 [2024-11-19 10:58:16.939317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.226 [2024-11-19 10:58:16.939349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.226 qpair failed and we were unable to recover it. 00:30:27.226 [2024-11-19 10:58:16.939538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.226 [2024-11-19 10:58:16.939570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.226 qpair failed and we were unable to recover it. 00:30:27.226 [2024-11-19 10:58:16.939761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.226 [2024-11-19 10:58:16.939792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.226 qpair failed and we were unable to recover it. 
00:30:27.226 [2024-11-19 10:58:16.939924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.226 [2024-11-19 10:58:16.939956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.226 qpair failed and we were unable to recover it. 00:30:27.226 [2024-11-19 10:58:16.940197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.226 [2024-11-19 10:58:16.940253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.226 qpair failed and we were unable to recover it. 00:30:27.226 [2024-11-19 10:58:16.940454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.226 [2024-11-19 10:58:16.940485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.226 qpair failed and we were unable to recover it. 00:30:27.226 [2024-11-19 10:58:16.940727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.226 [2024-11-19 10:58:16.940758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.226 qpair failed and we were unable to recover it. 00:30:27.226 [2024-11-19 10:58:16.940897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.226 [2024-11-19 10:58:16.940929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.226 qpair failed and we were unable to recover it. 
00:30:27.226 [2024-11-19 10:58:16.941129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.226 [2024-11-19 10:58:16.941160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.226 qpair failed and we were unable to recover it. 00:30:27.226 [2024-11-19 10:58:16.941346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.226 [2024-11-19 10:58:16.941384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.226 qpair failed and we were unable to recover it. 00:30:27.226 [2024-11-19 10:58:16.941556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.226 [2024-11-19 10:58:16.941587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.226 qpair failed and we were unable to recover it. 00:30:27.226 [2024-11-19 10:58:16.941688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.226 [2024-11-19 10:58:16.941719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.226 qpair failed and we were unable to recover it. 00:30:27.226 [2024-11-19 10:58:16.941923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.226 [2024-11-19 10:58:16.941954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.226 qpair failed and we were unable to recover it. 
00:30:27.555 [2024-11-19 10:58:16.942245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-11-19 10:58:16.942279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-11-19 10:58:16.942570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-11-19 10:58:16.942601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-11-19 10:58:16.942839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-11-19 10:58:16.942873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-11-19 10:58:16.943068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-11-19 10:58:16.943109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-11-19 10:58:16.943285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-11-19 10:58:16.943316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 
00:30:27.555 [2024-11-19 10:58:16.943592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-11-19 10:58:16.943624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-11-19 10:58:16.943755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-11-19 10:58:16.943787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-11-19 10:58:16.944024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-11-19 10:58:16.944061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-11-19 10:58:16.944327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-11-19 10:58:16.944359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-11-19 10:58:16.944625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-11-19 10:58:16.944656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 
00:30:27.555 [2024-11-19 10:58:16.944914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-11-19 10:58:16.944948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-11-19 10:58:16.945214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-11-19 10:58:16.945249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-11-19 10:58:16.945440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-11-19 10:58:16.945472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-11-19 10:58:16.945654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.555 [2024-11-19 10:58:16.945686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.555 qpair failed and we were unable to recover it. 00:30:27.555 [2024-11-19 10:58:16.945816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-11-19 10:58:16.945847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 
00:30:27.556 [2024-11-19 10:58:16.945975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-11-19 10:58:16.946007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-11-19 10:58:16.946142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-11-19 10:58:16.946174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-11-19 10:58:16.946438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-11-19 10:58:16.946507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-11-19 10:58:16.946663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-11-19 10:58:16.946699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 00:30:27.556 [2024-11-19 10:58:16.946888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.556 [2024-11-19 10:58:16.946920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.556 qpair failed and we were unable to recover it. 
00:30:27.556 [2024-11-19 10:58:16.947096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.947140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.947385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.947420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.947630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.947661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.947915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.947947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.948225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.948263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.948467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.948498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.948703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.948733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.948996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.949028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.949241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.949274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.949391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.949428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.949688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.949726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.949945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.949981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.950194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.950247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.950457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.950494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.950705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.950744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.950991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.951031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.951228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.951270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.951457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.951496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.951660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.951701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.951907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.951948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.952150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.952186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.952324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.952366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.952489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.952527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.952785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.952820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.952953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.952994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.953194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.953258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.953485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.953521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.953718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.953762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.953991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.954032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.954233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.954270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.954487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.954522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.556 [2024-11-19 10:58:16.954730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.556 [2024-11-19 10:58:16.954766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.556 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.954973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.955010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.955236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.955290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.955517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.955564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.955736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.955783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.955929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.955977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.956110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.956156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.956454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.956503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.956661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.956707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.956977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.957021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.957243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.957292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.957444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.957491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.957695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.957741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.957956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.958002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.958135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.958181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.958370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.958417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.958711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.958755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.959024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.959070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.959282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.959329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.959545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.959592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.959737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.959784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.960078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.960124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.960347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.960395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.960626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.960673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.960963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.961010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.961250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.961299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.961607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.961653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.961857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.961902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.962118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.962165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.962393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.962440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.962660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.962707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.962911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.962956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.963159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.963221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.963442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.963489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.963757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.557 [2024-11-19 10:58:16.963803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.557 qpair failed and we were unable to recover it.
00:30:27.557 [2024-11-19 10:58:16.964095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.964141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.964387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.964444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.964762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.964809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.965083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.965130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.965381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.965429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.965655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.965702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.965936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.965982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.966197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.966259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.966534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.966579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.966804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.966851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.967006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.967052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.967295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.967343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.967500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.967547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.967815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.967861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.968089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.968135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.968395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.968444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.968668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.968715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.969038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.969068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.969244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.969275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.969510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.969540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.969778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.969808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.969935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.969965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.970254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.970286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.970466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.970495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.970599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.970629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.970810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.970840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.971122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.971153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.971277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.971309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.971504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.558 [2024-11-19 10:58:16.971533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.558 qpair failed and we were unable to recover it.
00:30:27.558 [2024-11-19 10:58:16.971738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.558 [2024-11-19 10:58:16.971769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.558 qpair failed and we were unable to recover it. 00:30:27.558 [2024-11-19 10:58:16.972030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.558 [2024-11-19 10:58:16.972060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.558 qpair failed and we were unable to recover it. 00:30:27.558 [2024-11-19 10:58:16.972267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.558 [2024-11-19 10:58:16.972302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.558 qpair failed and we were unable to recover it. 00:30:27.558 [2024-11-19 10:58:16.972424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.558 [2024-11-19 10:58:16.972446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.558 qpair failed and we were unable to recover it. 00:30:27.558 [2024-11-19 10:58:16.972593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.558 [2024-11-19 10:58:16.972613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.558 qpair failed and we were unable to recover it. 
00:30:27.558 [2024-11-19 10:58:16.972767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.972787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.972933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.972953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.973129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.973149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.973391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.973413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.973528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.973548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 
00:30:27.559 [2024-11-19 10:58:16.973705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.973726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.973937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.973956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.974130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.974155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.974341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.974363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.974518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.974539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 
00:30:27.559 [2024-11-19 10:58:16.974643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.974663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.974747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.974767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.974943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.974963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.975067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.975088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.975189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.975218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 
00:30:27.559 [2024-11-19 10:58:16.975381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.975401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.975509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.975529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.975791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.975811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.975974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.975995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.976146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.976166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 
00:30:27.559 [2024-11-19 10:58:16.976273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.976295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.976532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.976553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.976664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.976684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.976932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.976952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.977032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.977052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 
00:30:27.559 [2024-11-19 10:58:16.977135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.977154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.977312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.977333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.977430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.977448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.977674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.977695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.977808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.977828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 
00:30:27.559 [2024-11-19 10:58:16.977987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.978008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.978118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.978139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.978241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.978262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.978419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.978440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.978655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.978676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 
00:30:27.559 [2024-11-19 10:58:16.978778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.978799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.978881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.978900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.559 qpair failed and we were unable to recover it. 00:30:27.559 [2024-11-19 10:58:16.979019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.559 [2024-11-19 10:58:16.979040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.979136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.979156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.979327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.979348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 
00:30:27.560 [2024-11-19 10:58:16.979514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.979538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.979656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.979680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.979846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.979871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.979959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.979983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.980141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.980165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 
00:30:27.560 [2024-11-19 10:58:16.980343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.980369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.980547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.980571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.980685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.980717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.980822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.980847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.981091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.981115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 
00:30:27.560 [2024-11-19 10:58:16.981285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.981312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.981405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.981430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.981615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.981640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.981815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.981840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.982011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.982036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 
00:30:27.560 [2024-11-19 10:58:16.982225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.982259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.982396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.982427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.982613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.982643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.982764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.982796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.982978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.983009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 
00:30:27.560 [2024-11-19 10:58:16.983266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.983300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.983565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.983591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.983688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.983712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.983907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.983931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.984092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.984117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 
00:30:27.560 [2024-11-19 10:58:16.984295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.984322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.984506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.984531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.560 [2024-11-19 10:58:16.984695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.560 [2024-11-19 10:58:16.984720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.560 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.984967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.984999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.985246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.985280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 
00:30:27.561 [2024-11-19 10:58:16.985533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.985564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.985778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.985809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.985944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.985976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.986115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.986147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.986410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.986443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 
00:30:27.561 [2024-11-19 10:58:16.986634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.986666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.986858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.986890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.986996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.987027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.987249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.987282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.987550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.987593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 
00:30:27.561 [2024-11-19 10:58:16.987843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.987868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.988098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.988122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.988288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.988313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.988515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.988540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.988736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.988760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 
00:30:27.561 [2024-11-19 10:58:16.988987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.989012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.989183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.989252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.989383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.989422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.989606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.989637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 00:30:27.561 [2024-11-19 10:58:16.989846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.561 [2024-11-19 10:58:16.989876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.561 qpair failed and we were unable to recover it. 
00:30:27.561 [2024-11-19 10:58:16.990087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.561 [2024-11-19 10:58:16.990119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.561 qpair failed and we were unable to recover it.
00:30:27.561 [2024-11-19 10:58:16.990306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.561 [2024-11-19 10:58:16.990340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.561 qpair failed and we were unable to recover it.
00:30:27.561 [2024-11-19 10:58:16.990520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.561 [2024-11-19 10:58:16.990552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.561 qpair failed and we were unable to recover it.
00:30:27.561 [2024-11-19 10:58:16.990747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.561 [2024-11-19 10:58:16.990779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.561 qpair failed and we were unable to recover it.
00:30:27.561 [2024-11-19 10:58:16.990953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.561 [2024-11-19 10:58:16.990984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.561 qpair failed and we were unable to recover it.
00:30:27.561 [2024-11-19 10:58:16.991116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.561 [2024-11-19 10:58:16.991148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.561 qpair failed and we were unable to recover it.
00:30:27.561 [2024-11-19 10:58:16.991293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.561 [2024-11-19 10:58:16.991326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.561 qpair failed and we were unable to recover it.
00:30:27.561 [2024-11-19 10:58:16.991563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.561 [2024-11-19 10:58:16.991595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.561 qpair failed and we were unable to recover it.
00:30:27.561 [2024-11-19 10:58:16.991724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.561 [2024-11-19 10:58:16.991756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.561 qpair failed and we were unable to recover it.
00:30:27.561 [2024-11-19 10:58:16.991978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.561 [2024-11-19 10:58:16.992009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.561 qpair failed and we were unable to recover it.
00:30:27.561 [2024-11-19 10:58:16.992309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.561 [2024-11-19 10:58:16.992343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.561 qpair failed and we were unable to recover it.
00:30:27.561 [2024-11-19 10:58:16.992527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.561 [2024-11-19 10:58:16.992558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.561 qpair failed and we were unable to recover it.
00:30:27.561 [2024-11-19 10:58:16.992731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.561 [2024-11-19 10:58:16.992763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.561 qpair failed and we were unable to recover it.
00:30:27.561 [2024-11-19 10:58:16.993055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.561 [2024-11-19 10:58:16.993087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.561 qpair failed and we were unable to recover it.
00:30:27.561 [2024-11-19 10:58:16.993283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.561 [2024-11-19 10:58:16.993316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.561 qpair failed and we were unable to recover it.
00:30:27.561 [2024-11-19 10:58:16.993508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.561 [2024-11-19 10:58:16.993540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.993801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.993833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.994074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.994105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.994324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.994358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.994627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.994658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.994923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.994955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.995135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.995166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.995378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.995412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.995605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.995637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.995952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.996023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.996322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.996361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.996584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.996617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.996758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.996790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.997054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.997086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.997325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.997359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.997603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.997634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.997809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.997840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.997962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.997993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.998179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.998220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.998430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.998462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.998652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.998684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.998869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.998901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.999071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.999102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.999234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.999268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.999509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.999542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.999665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.999696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:16.999810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:16.999842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:17.000052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:17.000084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:17.000276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:17.000310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:17.000533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:17.000564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:17.000756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:17.000788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:17.000912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:17.000943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:17.001069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:17.001101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:17.001351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:17.001386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:17.001490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:17.001522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:17.001713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:17.001745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:17.001859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:17.001897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.562 [2024-11-19 10:58:17.002006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.562 [2024-11-19 10:58:17.002038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.562 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.002235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.002268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.002454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.002485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.002676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.002708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.002917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.002949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.003141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.003173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.003298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.003330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.003517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.003549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.003815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.003847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.004040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.004071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.004314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.004347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.004632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.004663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.004854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.004886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.005163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.005195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.005375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.005407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.005593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.005625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.005863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.005895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.006131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.006162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.006311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.006345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.006581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.006613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.006796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.006827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.007087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.007119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.007302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.007336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.007508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.007539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.007711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.007742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.007914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.007946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.008069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.008107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.008228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.008261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.008455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.008487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.008616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.008648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.008771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.008803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.008933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.008965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.009227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.009261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.009466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.009499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.009619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.009651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.009838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.009870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.010156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.010188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.010408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.010441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.563 qpair failed and we were unable to recover it.
00:30:27.563 [2024-11-19 10:58:17.010582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.563 [2024-11-19 10:58:17.010614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.564 qpair failed and we were unable to recover it.
00:30:27.564 [2024-11-19 10:58:17.010784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.564 [2024-11-19 10:58:17.010816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.564 qpair failed and we were unable to recover it.
00:30:27.564 [2024-11-19 10:58:17.011079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.564 [2024-11-19 10:58:17.011111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.564 qpair failed and we were unable to recover it.
00:30:27.564 [2024-11-19 10:58:17.011376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.564 [2024-11-19 10:58:17.011409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.564 qpair failed and we were unable to recover it.
00:30:27.564 [2024-11-19 10:58:17.011594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.564 [2024-11-19 10:58:17.011626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.564 qpair failed and we were unable to recover it.
00:30:27.564 [2024-11-19 10:58:17.011752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.011783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.012027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.012059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.012245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.012279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.012542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.012574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.012689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.012722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 
00:30:27.564 [2024-11-19 10:58:17.012851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.012882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.013055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.013086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.013267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.013301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.013499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.013530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.013664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.013697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 
00:30:27.564 [2024-11-19 10:58:17.013815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.013853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.013979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.014011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.014197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.014238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.014504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.014536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.014777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.014808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 
00:30:27.564 [2024-11-19 10:58:17.015011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.015042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.015220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.015254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.015443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.015474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.015678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.015710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.015919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.015951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 
00:30:27.564 [2024-11-19 10:58:17.016166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.016198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.016453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.016485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.016657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.016688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.016826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.016859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.017127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.017160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 
00:30:27.564 [2024-11-19 10:58:17.017446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.017479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.017681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.017713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.017935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.017967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.018154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.018186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 00:30:27.564 [2024-11-19 10:58:17.018339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.564 [2024-11-19 10:58:17.018370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.564 qpair failed and we were unable to recover it. 
00:30:27.564 [2024-11-19 10:58:17.018753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.564 [2024-11-19 10:58:17.018824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.564 qpair failed and we were unable to recover it.
00:30:27.567 [2024-11-19 10:58:17.034977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.035009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.035143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.035175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.035353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.035385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.035500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.035531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.035735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.035767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 
00:30:27.567 [2024-11-19 10:58:17.036008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.036039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.036178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.036220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.036412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.036444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.036684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.036716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.036981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.037013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 
00:30:27.567 [2024-11-19 10:58:17.037250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.037285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.037421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.037452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.037635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.037667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.037928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.037965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.038147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.038179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 
00:30:27.567 [2024-11-19 10:58:17.038427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.038459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.038660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.038692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.038865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.038897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.039169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.039210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.039399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.039430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 
00:30:27.567 [2024-11-19 10:58:17.039543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.039574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.039691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.039723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.039894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.039926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.040164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.040195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.040396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.040428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 
00:30:27.567 [2024-11-19 10:58:17.040543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.040575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.040758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.040789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.040977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.041008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.041190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.041235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.041416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.041447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 
00:30:27.567 [2024-11-19 10:58:17.041626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.041659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.041844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.041875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.042052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.567 [2024-11-19 10:58:17.042084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.567 qpair failed and we were unable to recover it. 00:30:27.567 [2024-11-19 10:58:17.042350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.042384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.042500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.042531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 
00:30:27.568 [2024-11-19 10:58:17.042670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.042702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.042837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.042869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.043131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.043162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.043365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.043398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.043570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.043601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 
00:30:27.568 [2024-11-19 10:58:17.043714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.043746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.043922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.043955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.044127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.044159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.044457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.044490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.044668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.044698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 
00:30:27.568 [2024-11-19 10:58:17.044879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.044911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.045141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.045172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.045400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.045433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.045554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.045585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.045826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.045858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 
00:30:27.568 [2024-11-19 10:58:17.046045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.046076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.046224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.046257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.046441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.046473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.046656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.046688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.046891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.046923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 
00:30:27.568 [2024-11-19 10:58:17.047115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.047148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.047274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.047307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.047513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.047545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.047724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.047756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.048017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.048048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 
00:30:27.568 [2024-11-19 10:58:17.048157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.048188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.048303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.048335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.048528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.048560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.048756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.048787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.048995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.049028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 
00:30:27.568 [2024-11-19 10:58:17.049225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.049258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.049445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.049476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.049666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.049697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.049815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.049847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.050027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.050059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 
00:30:27.568 [2024-11-19 10:58:17.050321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.568 [2024-11-19 10:58:17.050354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.568 qpair failed and we were unable to recover it. 00:30:27.568 [2024-11-19 10:58:17.050527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.050558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.050737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.050769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.051015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.051045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.051285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.051318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 
00:30:27.569 [2024-11-19 10:58:17.051527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.051558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.051823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.051855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.052039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.052071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.052311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.052343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.052528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.052559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 
00:30:27.569 [2024-11-19 10:58:17.052771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.052804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.053042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.053079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.053284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.053319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.053506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.053537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.053742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.053774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 
00:30:27.569 [2024-11-19 10:58:17.053994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.054025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.054232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.054265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.054442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.054473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.054718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.054751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.054871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.054902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 
00:30:27.569 [2024-11-19 10:58:17.055072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.055104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.055346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.055379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.055564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.055596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.055773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.055804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.055981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.056012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 
00:30:27.569 [2024-11-19 10:58:17.056216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.056248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.056439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.056470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.056712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.056743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.056878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.056909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.057012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.057044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 
00:30:27.569 [2024-11-19 10:58:17.057252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.057285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.569 [2024-11-19 10:58:17.057531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.569 [2024-11-19 10:58:17.057564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.569 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.057744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.057775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.058018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.058050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.058227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.058261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 
00:30:27.570 [2024-11-19 10:58:17.058457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.058487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.058750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.058783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.058989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.059021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.059307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.059344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.059606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.059637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 
00:30:27.570 [2024-11-19 10:58:17.059810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.059842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.060085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.060116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.060360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.060393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.060513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.060545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.060717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.060748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 
00:30:27.570 [2024-11-19 10:58:17.060986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.061017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.061253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.061286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.061499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.061531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.061726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.061758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.061875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.061907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 
00:30:27.570 [2024-11-19 10:58:17.062087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.062118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.062385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.062418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.062635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.062667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.062789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.062821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.063010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.063041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 
00:30:27.570 [2024-11-19 10:58:17.063221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.063253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.063464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.063497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.063765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.063797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.063902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.063934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.064046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.064077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 
00:30:27.570 [2024-11-19 10:58:17.064187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.064229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.064492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.064525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.064707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.064738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.064853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.064885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.065058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.065090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 
00:30:27.570 [2024-11-19 10:58:17.065274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.065313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.065575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.065608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.065859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.065889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.066092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.066124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 00:30:27.570 [2024-11-19 10:58:17.066293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.570 [2024-11-19 10:58:17.066327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.570 qpair failed and we were unable to recover it. 
00:30:27.570 [2024-11-19 10:58:17.066505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.066536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.066670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.066702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.066893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.066924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.067138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.067170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.067381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.067413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 
00:30:27.571 [2024-11-19 10:58:17.067518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.067550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.067663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.067694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.067957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.067989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.068161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.068193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.068484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.068517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 
00:30:27.571 [2024-11-19 10:58:17.068689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.068720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.068968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.069000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.069221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.069254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.069442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.069474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.069724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.069755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 
00:30:27.571 [2024-11-19 10:58:17.069960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.069993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.070229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.070263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.070479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.070510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.070629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.070662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.070873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.070905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 
00:30:27.571 [2024-11-19 10:58:17.071010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.071042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.071221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.071254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.071516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.071548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.071771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.071803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.072067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.072100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 
00:30:27.571 [2024-11-19 10:58:17.072285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.072319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.072505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.072537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.072740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.072771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.073013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.073045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.073244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.073277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 
00:30:27.571 [2024-11-19 10:58:17.073526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.073557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.073751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.073782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.073903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.073934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.074041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.074073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 00:30:27.571 [2024-11-19 10:58:17.074365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.571 [2024-11-19 10:58:17.074398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.571 qpair failed and we were unable to recover it. 
00:30:27.574 [2024-11-19 10:58:17.095426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.574 [2024-11-19 10:58:17.095496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.574 qpair failed and we were unable to recover it. 
00:30:27.574 [2024-11-19 10:58:17.096325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.574 [2024-11-19 10:58:17.096359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.574 qpair failed and we were unable to recover it. 00:30:27.574 [2024-11-19 10:58:17.096469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.574 [2024-11-19 10:58:17.096502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.574 qpair failed and we were unable to recover it. 00:30:27.574 [2024-11-19 10:58:17.096741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.574 [2024-11-19 10:58:17.096773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.574 qpair failed and we were unable to recover it. 00:30:27.574 [2024-11-19 10:58:17.096916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.574 [2024-11-19 10:58:17.096949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.574 qpair failed and we were unable to recover it. 00:30:27.574 [2024-11-19 10:58:17.097124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.574 [2024-11-19 10:58:17.097156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.574 qpair failed and we were unable to recover it. 
00:30:27.574 [2024-11-19 10:58:17.097352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.574 [2024-11-19 10:58:17.097385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.574 qpair failed and we were unable to recover it. 00:30:27.574 [2024-11-19 10:58:17.097579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.574 [2024-11-19 10:58:17.097610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.574 qpair failed and we were unable to recover it. 00:30:27.574 [2024-11-19 10:58:17.097784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.574 [2024-11-19 10:58:17.097815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.574 qpair failed and we were unable to recover it. 00:30:27.574 [2024-11-19 10:58:17.098054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.574 [2024-11-19 10:58:17.098084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.574 qpair failed and we were unable to recover it. 00:30:27.574 [2024-11-19 10:58:17.098293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.098336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 
00:30:27.575 [2024-11-19 10:58:17.098579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.098610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.098816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.098848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.099031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.099064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.099180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.099221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.099486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.099519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 
00:30:27.575 [2024-11-19 10:58:17.099765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.099796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.099988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.100019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.100152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.100185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.100406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.100438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.100722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.100754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 
00:30:27.575 [2024-11-19 10:58:17.100883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.100915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.101031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.101063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.101256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.101290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.101507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.101539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.101657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.101689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 
00:30:27.575 [2024-11-19 10:58:17.101825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.101858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.102116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.102147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.102345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.102379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.102638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.102668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.102856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.102888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 
00:30:27.575 [2024-11-19 10:58:17.103083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.103115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.103231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.103264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.103438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.103471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.103675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.103706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.103881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.103912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 
00:30:27.575 [2024-11-19 10:58:17.104101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.104132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.104377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.104448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.104634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.104702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.104855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.104891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.105017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.105050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 
00:30:27.575 [2024-11-19 10:58:17.105335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.105368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.105475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.105506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.105767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.105799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.106038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.106070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.106269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.106302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 
00:30:27.575 [2024-11-19 10:58:17.106492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.106523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.575 [2024-11-19 10:58:17.106762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.575 [2024-11-19 10:58:17.106792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.575 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.107004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.107036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.107229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.107262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.107439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.107470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 
00:30:27.576 [2024-11-19 10:58:17.107762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.107793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.107927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.107960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.108226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.108259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.108392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.108425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.108601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.108634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 
00:30:27.576 [2024-11-19 10:58:17.108874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.108905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.109153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.109185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.109450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.109482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.109656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.109687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.109959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.109991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 
00:30:27.576 [2024-11-19 10:58:17.110125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.110158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.110294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.110326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.110454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.110486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.110658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.110695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.110880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.110912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 
00:30:27.576 [2024-11-19 10:58:17.111104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.111135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.111380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.111413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.111552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.111583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.111700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.111732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.111997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.112029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 
00:30:27.576 [2024-11-19 10:58:17.112159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.112191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.112393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.112425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.112633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.112666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.112791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.112822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.112939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.112971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 
00:30:27.576 [2024-11-19 10:58:17.113184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.113224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.113404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.113436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.113554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.113586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.113850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.113882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.114009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.114041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 
00:30:27.576 [2024-11-19 10:58:17.114166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.114198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.114402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.114435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.114608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.114639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.114765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.114796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 00:30:27.576 [2024-11-19 10:58:17.114921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.576 [2024-11-19 10:58:17.114952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.576 qpair failed and we were unable to recover it. 
00:30:27.576 [2024-11-19 10:58:17.115078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.115110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.115280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.115314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.115437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.115468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.115648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.115680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.115921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.115953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 
00:30:27.577 [2024-11-19 10:58:17.116059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.116097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.116220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.116253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.116520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.116552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.116689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.116720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.116916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.116948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 
00:30:27.577 [2024-11-19 10:58:17.117151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.117181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.117332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.117365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.117548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.117580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.117844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.117875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.118013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.118044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 
00:30:27.577 [2024-11-19 10:58:17.118233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.118266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.118439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.118470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.118675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.118707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.118824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.118855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.119032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.119064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 
00:30:27.577 [2024-11-19 10:58:17.119272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.119306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.119496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.119527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.119667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.119699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.119911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.119942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.120210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.120242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 
00:30:27.577 [2024-11-19 10:58:17.120511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.120542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.120666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.120698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.120930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.120961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.121226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.121259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.121498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.121530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 
00:30:27.577 [2024-11-19 10:58:17.121718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.121750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.121966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.121997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.122182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.122236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.122448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.122479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.122744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.122776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 
00:30:27.577 [2024-11-19 10:58:17.123043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.123074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.123286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.123319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.123512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.577 [2024-11-19 10:58:17.123544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.577 qpair failed and we were unable to recover it. 00:30:27.577 [2024-11-19 10:58:17.123714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.123746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.123868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.123900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 
00:30:27.578 [2024-11-19 10:58:17.124005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.124037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.124164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.124195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.124378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.124410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.124516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.124548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.124727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.124759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 
00:30:27.578 [2024-11-19 10:58:17.125000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.125031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.125274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.125318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.125524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.125558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.125701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.125733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.125901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.125932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 
00:30:27.578 [2024-11-19 10:58:17.126175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.126219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.126463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.126495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.126734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.126766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.126952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.126984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.127230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.127264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 
00:30:27.578 [2024-11-19 10:58:17.127511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.127541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.127727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.127759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.128021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.128053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.128175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.128215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.128408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.128449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 
00:30:27.578 [2024-11-19 10:58:17.128583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.128615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.128853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.128884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.128994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.129026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.129149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.129181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.129313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.129346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 
00:30:27.578 [2024-11-19 10:58:17.129556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.129588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.129775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.129806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.129912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.129944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.130098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.130128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.130370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.130404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 
00:30:27.578 [2024-11-19 10:58:17.130645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.130676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.130804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.130836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.131017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.131047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.131185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.578 [2024-11-19 10:58:17.131224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.578 qpair failed and we were unable to recover it. 00:30:27.578 [2024-11-19 10:58:17.131420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.131452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 
00:30:27.579 [2024-11-19 10:58:17.131646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.131676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.131794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.131826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.132070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.132102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.132377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.132410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.132675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.132706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 
00:30:27.579 [2024-11-19 10:58:17.132833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.132865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.133106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.133137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.133258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.133290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.133415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.133446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.133718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.133749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 
00:30:27.579 [2024-11-19 10:58:17.133879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.133909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.134108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.134144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.134422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.134454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.134634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.134667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.134854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.134886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 
00:30:27.579 [2024-11-19 10:58:17.135056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.135086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.135214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.135248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.135534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.135565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.135693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.135725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.135987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.136019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 
00:30:27.579 [2024-11-19 10:58:17.136158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.136190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.136313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.136345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.136588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.136620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.136807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.136838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.137106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.137144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 
00:30:27.579 [2024-11-19 10:58:17.137394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.137427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.137600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.137632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.137818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.137849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.138111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.138143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 00:30:27.579 [2024-11-19 10:58:17.138330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.579 [2024-11-19 10:58:17.138364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.579 qpair failed and we were unable to recover it. 
00:30:27.579 [2024-11-19 10:58:17.138497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.579 [2024-11-19 10:58:17.138528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.579 qpair failed and we were unable to recover it.
00:30:27.579 [2024-11-19 10:58:17.138697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.579 [2024-11-19 10:58:17.138729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.579 qpair failed and we were unable to recover it.
00:30:27.579 [2024-11-19 10:58:17.138846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.138877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.139083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.139115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.139334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.139367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.139605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.139636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.139807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.139839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.140019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.140049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.140243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.140277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.140453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.140485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.140661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.140693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.140882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.140913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.141095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.141127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.141381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.141414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.141693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.141724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.141912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.141944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.142119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.142150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.142335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.142367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.142575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.142607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.142718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.142750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.142942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.142973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.143262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.143306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.143526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.143559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.143773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.143806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.144057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.144089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.144276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.144311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.144532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.144563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.144739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.144771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.144882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.144914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.145099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.145131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.145378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.145412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.145597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.145630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.145753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.145785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.146004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.146035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.146227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.146269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.146485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.146517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.146783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.146815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.147074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.147107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.147241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.580 [2024-11-19 10:58:17.147274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.580 qpair failed and we were unable to recover it.
00:30:27.580 [2024-11-19 10:58:17.147544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.147576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.147762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.147795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.148011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.148042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.148174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.148218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.148404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.148436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.148699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.148730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.148967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.148999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.149128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.149161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.149386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.149419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.149676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.149709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.149974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.150006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.150195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.150239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.150370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.150402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.150522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.150553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.150793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.150825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.150958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.150989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.151119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.151152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.151371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.151404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.151649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.151681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.151923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.151954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.152140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.152172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.152323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.152359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.152581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.152622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.152746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.152777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.152958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.152990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.153161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.153193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.153469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.153501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.153696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.153727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.153848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.153881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.154092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.154123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.154311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.154345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.154468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.154499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.154601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.154632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.154753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.154784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.154955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.154988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.155271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.155304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.155448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.155480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.155670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.581 [2024-11-19 10:58:17.155702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.581 qpair failed and we were unable to recover it.
00:30:27.581 [2024-11-19 10:58:17.155894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.155924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.156159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.156191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.156372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.156404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.156521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.156553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.156728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.156759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.156934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.156965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.157153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.157184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.157365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.157397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.157583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.157614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.157745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.157777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.157905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.157936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.158177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.158227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.158520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.158551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.158739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.158770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.158983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.159014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.159218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.159251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.159452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.159484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.159596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.582 [2024-11-19 10:58:17.159627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.582 qpair failed and we were unable to recover it.
00:30:27.582 [2024-11-19 10:58:17.159892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.159924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 00:30:27.582 [2024-11-19 10:58:17.160101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.160132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 00:30:27.582 [2024-11-19 10:58:17.160343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.160376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 00:30:27.582 [2024-11-19 10:58:17.160620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.160652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 00:30:27.582 [2024-11-19 10:58:17.160844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.160876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 
00:30:27.582 [2024-11-19 10:58:17.161137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.161169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 00:30:27.582 [2024-11-19 10:58:17.161360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.161392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 00:30:27.582 [2024-11-19 10:58:17.161581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.161614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 00:30:27.582 [2024-11-19 10:58:17.161740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.161772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 00:30:27.582 [2024-11-19 10:58:17.161955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.161987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 
00:30:27.582 [2024-11-19 10:58:17.162224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.162258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 00:30:27.582 [2024-11-19 10:58:17.162470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.162502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 00:30:27.582 [2024-11-19 10:58:17.162676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.162708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 00:30:27.582 [2024-11-19 10:58:17.162810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.162843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 00:30:27.582 [2024-11-19 10:58:17.163055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.163086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 
00:30:27.582 [2024-11-19 10:58:17.163285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.163318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 00:30:27.582 [2024-11-19 10:58:17.163507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.163539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 00:30:27.582 [2024-11-19 10:58:17.163652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.163683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 00:30:27.582 [2024-11-19 10:58:17.163836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.163867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 00:30:27.582 [2024-11-19 10:58:17.164082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.582 [2024-11-19 10:58:17.164114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.582 qpair failed and we were unable to recover it. 
00:30:27.582 [2024-11-19 10:58:17.164421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.164460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.164656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.164687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.164885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.164917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.165183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.165223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.165466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.165498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 
00:30:27.583 [2024-11-19 10:58:17.165633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.165664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.165912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.165945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.166129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.166160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.166338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.166370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.166619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.166651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 
00:30:27.583 [2024-11-19 10:58:17.166914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.166946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.167064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.167094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.167379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.167412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.167689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.167721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.167908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.167939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 
00:30:27.583 [2024-11-19 10:58:17.168124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.168156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.168432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.168465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.168637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.168667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.168841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.168873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.169065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.169097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 
00:30:27.583 [2024-11-19 10:58:17.169339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.169373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.169577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.169608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.169807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.169839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.170088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.170120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.170303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.170337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 
00:30:27.583 [2024-11-19 10:58:17.170537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.170568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.170838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.170870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.171008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.171045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.171170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.171217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 00:30:27.583 [2024-11-19 10:58:17.171404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.583 [2024-11-19 10:58:17.171436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.583 qpair failed and we were unable to recover it. 
00:30:27.583 [2024-11-19 10:58:17.171699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.171731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.171850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.171882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.172102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.172134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.172323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.172356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.172624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.172657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 
00:30:27.584 [2024-11-19 10:58:17.172841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.172872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.172989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.173021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.173217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.173249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.173426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.173458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.173599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.173630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 
00:30:27.584 [2024-11-19 10:58:17.173803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.173835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.174037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.174069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.174278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.174312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.174500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.174531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.174772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.174804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 
00:30:27.584 [2024-11-19 10:58:17.174925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.174956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.175224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.175256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.175517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.175548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.175814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.175846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.176032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.176063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 
00:30:27.584 [2024-11-19 10:58:17.176316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.176350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.176592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.176623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.176798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.176829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.177139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.177170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.177455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.177488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 
00:30:27.584 [2024-11-19 10:58:17.177685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.177716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.584 [2024-11-19 10:58:17.177969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.584 [2024-11-19 10:58:17.178001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.584 qpair failed and we were unable to recover it. 00:30:27.585 [2024-11-19 10:58:17.178199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.178261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 00:30:27.585 [2024-11-19 10:58:17.178525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.178557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 00:30:27.585 [2024-11-19 10:58:17.178734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.178765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 
00:30:27.585 [2024-11-19 10:58:17.178890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.178922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 00:30:27.585 [2024-11-19 10:58:17.179137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.179168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 00:30:27.585 [2024-11-19 10:58:17.179367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.179400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 00:30:27.585 [2024-11-19 10:58:17.179578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.179608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 00:30:27.585 [2024-11-19 10:58:17.179850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.179882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 
00:30:27.585 [2024-11-19 10:58:17.180069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.180100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 00:30:27.585 [2024-11-19 10:58:17.180362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.180394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 00:30:27.585 [2024-11-19 10:58:17.180511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.180542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 00:30:27.585 [2024-11-19 10:58:17.180723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.180760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 00:30:27.585 [2024-11-19 10:58:17.180889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.180920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 
00:30:27.585 [2024-11-19 10:58:17.181177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.181222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 00:30:27.585 [2024-11-19 10:58:17.181431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.181463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 00:30:27.585 [2024-11-19 10:58:17.181586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.181618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 00:30:27.585 [2024-11-19 10:58:17.181818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.181849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 00:30:27.585 [2024-11-19 10:58:17.182099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.585 [2024-11-19 10:58:17.182130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.585 qpair failed and we were unable to recover it. 
00:30:27.585 [2024-11-19 10:58:17.182370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.585 [2024-11-19 10:58:17.182404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.585 qpair failed and we were unable to recover it.
00:30:27.585 [2024-11-19 10:58:17.182592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.585 [2024-11-19 10:58:17.182623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.585 qpair failed and we were unable to recover it.
00:30:27.585 [2024-11-19 10:58:17.182888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.585 [2024-11-19 10:58:17.182921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.585 qpair failed and we were unable to recover it.
00:30:27.585 [2024-11-19 10:58:17.183119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.585 [2024-11-19 10:58:17.183149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.585 qpair failed and we were unable to recover it.
00:30:27.585 [2024-11-19 10:58:17.183338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.585 [2024-11-19 10:58:17.183370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.585 qpair failed and we were unable to recover it.
00:30:27.585 [2024-11-19 10:58:17.183485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.585 [2024-11-19 10:58:17.183516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.585 qpair failed and we were unable to recover it.
00:30:27.585 [2024-11-19 10:58:17.183780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.585 [2024-11-19 10:58:17.183812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.585 qpair failed and we were unable to recover it.
00:30:27.585 [2024-11-19 10:58:17.184078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.585 [2024-11-19 10:58:17.184109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.585 qpair failed and we were unable to recover it.
00:30:27.585 [2024-11-19 10:58:17.184222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.585 [2024-11-19 10:58:17.184255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.585 qpair failed and we were unable to recover it.
00:30:27.585 [2024-11-19 10:58:17.184493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.585 [2024-11-19 10:58:17.184524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.585 qpair failed and we were unable to recover it.
00:30:27.585 [2024-11-19 10:58:17.184741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.585 [2024-11-19 10:58:17.184773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.185013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.185044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.185224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.185257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.185459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.185490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.185674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.185706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.185963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.185994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.186168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.186199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.186357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.186388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.186570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.186602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.186811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.186842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.187080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.187116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.187324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.187357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.187539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.187570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.187863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.187895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.188135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.188167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.188356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.188389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.188632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.188664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.188926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.188957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.189134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.189166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.189370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.189403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.189511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.189542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.189809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.189841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.190020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.190051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.190182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.190233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.190507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.190540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.190751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.190782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.190903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.190935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.191118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.191149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.191332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.191364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.586 qpair failed and we were unable to recover it.
00:30:27.586 [2024-11-19 10:58:17.191807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.586 [2024-11-19 10:58:17.191844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.192028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.192071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.192335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.192371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.192634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.192665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.192903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.192935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.193051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.193083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.193258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.193290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.193493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.193524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.193738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.193778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.193952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.193983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.194228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.194261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.194522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.194554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.194737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.194768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.194973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.195004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.195192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.195234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.195427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.195458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.195575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.195607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.195783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.195814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.195985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.196017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.196289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.196322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.196509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.196540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.196797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.196829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.197036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.197068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.197255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.197287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.197475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.197507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.197760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.197792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.198024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.198055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.198272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.198306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.198492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.198523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.198725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.198758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.587 [2024-11-19 10:58:17.198891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.587 [2024-11-19 10:58:17.198922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.587 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.199162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.199194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.199456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.199488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.199679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.199710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.199831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.199862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.200075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.200107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.200295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.200329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.200535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.200567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.200809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.200840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.200976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.201008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.201286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.201320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.201444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.201476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.201602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.201633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.201874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.201905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.202176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.202216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.202434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.202465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.202656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.202688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.202977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.203009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.203209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.203241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.203420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.203452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.203662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.203694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.203868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.203899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.204016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.204048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.204229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.204262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.204389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.204420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.204671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.588 [2024-11-19 10:58:17.204703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-11-19 10:58:17.204973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.588 [2024-11-19 10:58:17.205005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.588 qpair failed and we were unable to recover it. 00:30:27.588 [2024-11-19 10:58:17.205184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.588 [2024-11-19 10:58:17.205226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.588 qpair failed and we were unable to recover it. 00:30:27.588 [2024-11-19 10:58:17.205422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.588 [2024-11-19 10:58:17.205453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.588 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.205665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.205696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.205822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.205852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 
00:30:27.589 [2024-11-19 10:58:17.206108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.206140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.206399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.206433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.206626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.206658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.206837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.206869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.206986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.207018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 
00:30:27.589 [2024-11-19 10:58:17.207267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.207302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.207420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.207451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.207664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.207696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.207976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.208008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.208130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.208163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 
00:30:27.589 [2024-11-19 10:58:17.208422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.208456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.208698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.208730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.208933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.208965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.209078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.209110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.209280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.209312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 
00:30:27.589 [2024-11-19 10:58:17.209604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.209641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.209880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.209912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.210101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.210132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.210421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.210454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.210664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.210696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 
00:30:27.589 [2024-11-19 10:58:17.210817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.210850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.211053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.211084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.211270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.211304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.211485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.211517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-11-19 10:58:17.211756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.589 [2024-11-19 10:58:17.211788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.589 qpair failed and we were unable to recover it. 
00:30:27.590 [2024-11-19 10:58:17.212027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.212058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.212180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.212220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.212457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.212490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.212606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.212636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.212758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.212791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 
00:30:27.590 [2024-11-19 10:58:17.213050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.213081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.213263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.213297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.213491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.213523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.213693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.213725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.213861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.213892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 
00:30:27.590 [2024-11-19 10:58:17.214062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.214094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.214275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.214308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.214445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.214476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.214656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.214688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.214857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.214888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 
00:30:27.590 [2024-11-19 10:58:17.215057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.215090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.215329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.215362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.215602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.215639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.215895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.215927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.216227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.216259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 
00:30:27.590 [2024-11-19 10:58:17.216462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.216494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.216667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.216698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.216874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.216905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.217080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.217112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-11-19 10:58:17.217379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.590 [2024-11-19 10:58:17.217413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.590 qpair failed and we were unable to recover it. 
00:30:27.591 [2024-11-19 10:58:17.217547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.217577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.217806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.217838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.218023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.218055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.218188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.218246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.218436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.218468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 
00:30:27.591 [2024-11-19 10:58:17.218640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.218673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.218876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.218907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.219147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.219180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.219313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.219345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.219608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.219640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 
00:30:27.591 [2024-11-19 10:58:17.219893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.219924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.220136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.220168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.220390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.220423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.220606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.220637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.220807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.220838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 
00:30:27.591 [2024-11-19 10:58:17.221075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.221107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.221321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.221354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.221497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.221528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.221645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.221677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.221941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.221983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 
00:30:27.591 [2024-11-19 10:58:17.222098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.222130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.222254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.222287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.222468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.222500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.222707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.222738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.222874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.222906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 
00:30:27.591 [2024-11-19 10:58:17.223043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.223075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.591 qpair failed and we were unable to recover it. 00:30:27.591 [2024-11-19 10:58:17.223248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.591 [2024-11-19 10:58:17.223281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.223454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.223485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.223595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.223628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.223840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.223871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 
00:30:27.592 [2024-11-19 10:58:17.224059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.224091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.224277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.224311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.224425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.224457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.224582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.224615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.224786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.224818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 
00:30:27.592 [2024-11-19 10:58:17.225011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.225043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.225222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.225254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.225443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.225475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.225670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.225701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.225946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.225978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 
00:30:27.592 [2024-11-19 10:58:17.226151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.226181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.226499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.226532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.226702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.226732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.227016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.227048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.227165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.227196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 
00:30:27.592 [2024-11-19 10:58:17.227397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.227428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.227616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.227648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.227832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.227864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.228035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.228067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.228303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.228336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 
00:30:27.592 [2024-11-19 10:58:17.228543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.228575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.228779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.228811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.229054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.229085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.229272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.229305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 00:30:27.592 [2024-11-19 10:58:17.229476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.229507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.592 qpair failed and we were unable to recover it. 
00:30:27.592 [2024-11-19 10:58:17.229681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.592 [2024-11-19 10:58:17.229713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.229972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.230003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.230174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.230214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.230459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.230490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.230673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.230705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 
00:30:27.593 [2024-11-19 10:58:17.230876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.230913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.231091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.231123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.231258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.231291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.231550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.231581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.231761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.231794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 
00:30:27.593 [2024-11-19 10:58:17.231978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.232008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.232180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.232221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.232404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.232435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.232618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.232649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.232784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.232815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 
00:30:27.593 [2024-11-19 10:58:17.232933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.232965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.233211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.233243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.233370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.233402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.233661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.233692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.233801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.233833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 
00:30:27.593 [2024-11-19 10:58:17.233945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.233976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.234188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.234249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.234490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.234522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.234655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.234686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.234882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.234913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 
00:30:27.593 [2024-11-19 10:58:17.235031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.235063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.235303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.235337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.235511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.235542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.235715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.235747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.235943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.235974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 
00:30:27.593 [2024-11-19 10:58:17.236143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.236175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.236366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.236398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.236579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.236616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.236806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.236836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 00:30:27.593 [2024-11-19 10:58:17.236939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.593 [2024-11-19 10:58:17.236971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.593 qpair failed and we were unable to recover it. 
00:30:27.593 [2024-11-19 10:58:17.237178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.237217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.237403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.237435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.237556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.237588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.237758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.237789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.237973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.238004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 
00:30:27.594 [2024-11-19 10:58:17.238121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.238154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.238342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.238374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.238549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.238581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.238844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.238876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.239058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.239090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 
00:30:27.594 [2024-11-19 10:58:17.239275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.239314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.239435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.239467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.239640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.239672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.239856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.239887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.240147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.240178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 
00:30:27.594 [2024-11-19 10:58:17.240371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.240403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.240667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.240699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.240948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.240979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.241155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.241187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.241390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.241422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 
00:30:27.594 [2024-11-19 10:58:17.241624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.241655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.241923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.241955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.242250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.242284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.242403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.242434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.242604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.242641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 
00:30:27.594 [2024-11-19 10:58:17.242834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.242865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.243053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.243084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.243223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.243256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.243521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.243553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.594 [2024-11-19 10:58:17.243662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.243693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 
00:30:27.594 [2024-11-19 10:58:17.243864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.594 [2024-11-19 10:58:17.243896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.594 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.244092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.244123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.244305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.244338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.244532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.244563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.244666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.244698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 
00:30:27.595 [2024-11-19 10:58:17.244848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.244880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.245049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.245080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.245339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.245372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.245513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.245545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.245729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.245760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 
00:30:27.595 [2024-11-19 10:58:17.245943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.245975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.246193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.246234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.246500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.246532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.246738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.246769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.246942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.246974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 
00:30:27.595 [2024-11-19 10:58:17.247111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.247143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.247339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.247372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.247565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.247595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.247701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.247733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.247946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.247977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 
00:30:27.595 [2024-11-19 10:58:17.248242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.248276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.248395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.248424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.248648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.248678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.248870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.248899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.249021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.249050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 
00:30:27.595 [2024-11-19 10:58:17.249242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.249273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.249457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.595 [2024-11-19 10:58:17.249486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.595 qpair failed and we were unable to recover it. 00:30:27.595 [2024-11-19 10:58:17.249595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.249624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.249799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.249829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.250077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.250106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 
00:30:27.596 [2024-11-19 10:58:17.250290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.250322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.250505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.250534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.250708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.250738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.250914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.250944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.251081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.251111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 
00:30:27.596 [2024-11-19 10:58:17.251382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.251414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.251667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.251697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.251877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.251907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.252025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.252054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.252188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.252228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 
00:30:27.596 [2024-11-19 10:58:17.252419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.252449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.252636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.252666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.252866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.252895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.253025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.253055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.253166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.253196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 
00:30:27.596 [2024-11-19 10:58:17.253337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.253368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.253539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.253570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.253744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.253774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.254088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.254118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.254231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.254263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 
00:30:27.596 [2024-11-19 10:58:17.254464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.254496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.254768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.254800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.254924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.254955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.255194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.255236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.596 qpair failed and we were unable to recover it. 00:30:27.596 [2024-11-19 10:58:17.255353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.596 [2024-11-19 10:58:17.255383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 
00:30:27.597 [2024-11-19 10:58:17.255570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.255602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.255727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.255758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.256021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.256052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.256233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.256267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.256383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.256415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 
00:30:27.597 [2024-11-19 10:58:17.256526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.256558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.256673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.256705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.256831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.256869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.257085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.257117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.257232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.257265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 
00:30:27.597 [2024-11-19 10:58:17.257453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.257485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.257665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.257697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.257891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.257923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.258097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.258128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.258316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.258350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 
00:30:27.597 [2024-11-19 10:58:17.258484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.258516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.258798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.258830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.259023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.259053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.259319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.259352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.259469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.259501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 
00:30:27.597 [2024-11-19 10:58:17.259690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.259723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.259856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.259887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.260059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.260091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.260272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.260305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.260419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.260450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 
00:30:27.597 [2024-11-19 10:58:17.260663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.260696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.260882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.260913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.261092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.261123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.261247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.261280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.261464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.261496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 
00:30:27.597 [2024-11-19 10:58:17.261759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.261791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.261975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.262007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.262142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.262174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.262435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.262503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 00:30:27.597 [2024-11-19 10:58:17.262653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.597 [2024-11-19 10:58:17.262698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.597 qpair failed and we were unable to recover it. 
00:30:27.598 [2024-11-19 10:58:17.262820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.262853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 00:30:27.598 [2024-11-19 10:58:17.263038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.263069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 00:30:27.598 [2024-11-19 10:58:17.263198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.263246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 00:30:27.598 [2024-11-19 10:58:17.263432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.263464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 00:30:27.598 [2024-11-19 10:58:17.263656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.263688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 
00:30:27.598 [2024-11-19 10:58:17.263955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.263986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 00:30:27.598 [2024-11-19 10:58:17.264180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.264221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 00:30:27.598 [2024-11-19 10:58:17.264466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.264498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 00:30:27.598 [2024-11-19 10:58:17.264683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.264714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 00:30:27.598 [2024-11-19 10:58:17.264976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.265009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 
00:30:27.598 [2024-11-19 10:58:17.265222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.265254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 00:30:27.598 [2024-11-19 10:58:17.265465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.265497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 00:30:27.598 [2024-11-19 10:58:17.265634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.265666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 00:30:27.598 [2024-11-19 10:58:17.265803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.265836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 00:30:27.598 [2024-11-19 10:58:17.266103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.266134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 
00:30:27.598 [2024-11-19 10:58:17.266267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.266300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 00:30:27.598 [2024-11-19 10:58:17.266481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.266512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 00:30:27.598 [2024-11-19 10:58:17.266692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.266723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 00:30:27.598 [2024-11-19 10:58:17.266900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.266931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 00:30:27.598 [2024-11-19 10:58:17.267148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.598 [2024-11-19 10:58:17.267179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.598 qpair failed and we were unable to recover it. 
00:30:27.598 [2024-11-19 10:58:17.267405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.598 [2024-11-19 10:58:17.267437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.598 qpair failed and we were unable to recover it.
00:30:27.598 [2024-11-19 10:58:17.267651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.598 [2024-11-19 10:58:17.267682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.598 qpair failed and we were unable to recover it.
00:30:27.598 [2024-11-19 10:58:17.267871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.598 [2024-11-19 10:58:17.267902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.598 qpair failed and we were unable to recover it.
00:30:27.598 [2024-11-19 10:58:17.268094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.598 [2024-11-19 10:58:17.268125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.598 qpair failed and we were unable to recover it.
00:30:27.598 [2024-11-19 10:58:17.268329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.598 [2024-11-19 10:58:17.268361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.598 qpair failed and we were unable to recover it.
00:30:27.598 [2024-11-19 10:58:17.268552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.598 [2024-11-19 10:58:17.268583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.598 qpair failed and we were unable to recover it.
00:30:27.598 [2024-11-19 10:58:17.268923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.598 [2024-11-19 10:58:17.268995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.598 qpair failed and we were unable to recover it.
00:30:27.598 [2024-11-19 10:58:17.269245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.598 [2024-11-19 10:58:17.269285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.598 qpair failed and we were unable to recover it.
00:30:27.598 [2024-11-19 10:58:17.269483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.598 [2024-11-19 10:58:17.269518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.598 qpair failed and we were unable to recover it.
00:30:27.598 [2024-11-19 10:58:17.269763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.598 [2024-11-19 10:58:17.269795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.598 qpair failed and we were unable to recover it.
00:30:27.598 [2024-11-19 10:58:17.270100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.598 [2024-11-19 10:58:17.270132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.598 qpair failed and we were unable to recover it.
00:30:27.598 [2024-11-19 10:58:17.270260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.598 [2024-11-19 10:58:17.270293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.598 qpair failed and we were unable to recover it.
00:30:27.598 [2024-11-19 10:58:17.270426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.598 [2024-11-19 10:58:17.270458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.598 qpair failed and we were unable to recover it.
00:30:27.598 [2024-11-19 10:58:17.270593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.598 [2024-11-19 10:58:17.270624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.598 qpair failed and we were unable to recover it.
00:30:27.598 [2024-11-19 10:58:17.270752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.598 [2024-11-19 10:58:17.270784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.598 qpair failed and we were unable to recover it.
00:30:27.598 [2024-11-19 10:58:17.270957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.598 [2024-11-19 10:58:17.270989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.598 qpair failed and we were unable to recover it.
00:30:27.598 [2024-11-19 10:58:17.271360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.598 [2024-11-19 10:58:17.271394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.598 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.271682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.271714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.271958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.271990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.272192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.272255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.272448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.272481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.272690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.272722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.272988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.273019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.273282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.273315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.273499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.273532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.273659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.273691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.273811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.273843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.274033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.274066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.274309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.274343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.274558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.274590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.274781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.274813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.275025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.275057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.275296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.275329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.275588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.275621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.275865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.275897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.276162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.276195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.276447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.276479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.276608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.276640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.276831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.276863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.277132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.277163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.277362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.277394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.277644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.277676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.277879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.277910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.278154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.278186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.278334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.278366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.278493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.278525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.278796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.278828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.278968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.279001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.279132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.279165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.279302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.279334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.279450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.279482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.279744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.279777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.280026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.280057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.280229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.280263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.280403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.280435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.280604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.280636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.280756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.280787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.599 qpair failed and we were unable to recover it.
00:30:27.599 [2024-11-19 10:58:17.281030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.599 [2024-11-19 10:58:17.281062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.281304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.281337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.281452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.281489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.281671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.281703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.281960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.281991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.282184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.282232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.282354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.282386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.282583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.282616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.282861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.282892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.283072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.283104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.283363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.283396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.283523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.283554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.283813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.283845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.283961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.283992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.284230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.284263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.284501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.284533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.284675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.284708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.284886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.284917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.285032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.285065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.285325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.285358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.285607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.285638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.285855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.285887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.286086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.286117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.286310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.286343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.286528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.286559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.286696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.286729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.286867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.286899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.287140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.287172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.287348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.287418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.287688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.287758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.287960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.287996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.288128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.288161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.288309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.288343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.288589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.288620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.288730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.600 [2024-11-19 10:58:17.288761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.600 qpair failed and we were unable to recover it.
00:30:27.600 [2024-11-19 10:58:17.288951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.601 [2024-11-19 10:58:17.288982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.601 qpair failed and we were unable to recover it.
00:30:27.601 [2024-11-19 10:58:17.289174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.601 [2024-11-19 10:58:17.289215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.601 qpair failed and we were unable to recover it.
00:30:27.601 [2024-11-19 10:58:17.289401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.601 [2024-11-19 10:58:17.289432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.601 qpair failed and we were unable to recover it.
00:30:27.601 [2024-11-19 10:58:17.289622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.289653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.289911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.289943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.290140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.290172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.290378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.290418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.290609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.290642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 
00:30:27.601 [2024-11-19 10:58:17.290846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.290880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.291068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.291099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.291299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.291333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.291575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.291606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.291741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.291773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 
00:30:27.601 [2024-11-19 10:58:17.292036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.292068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.292306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.292339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.292598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.292630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.292808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.292839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.293016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.293047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 
00:30:27.601 [2024-11-19 10:58:17.293224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.293256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.293388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.293420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.293680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.293712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.293962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.293993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.294181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.294226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 
00:30:27.601 [2024-11-19 10:58:17.294425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.294456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.294658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.294689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.294822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.294854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.295097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.295129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.295324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.295358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 
00:30:27.601 [2024-11-19 10:58:17.295538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.295568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.295758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.295790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.295981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.296013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.296124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.296156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.296346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.296378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 
00:30:27.601 [2024-11-19 10:58:17.296612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.296644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.296816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.296854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.296987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.297018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.297266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.297299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.297472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.297504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 
00:30:27.601 [2024-11-19 10:58:17.297676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.297707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.297840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.297871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.298125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.298158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.298341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.298373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 00:30:27.601 [2024-11-19 10:58:17.298610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.298643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.601 qpair failed and we were unable to recover it. 
00:30:27.601 [2024-11-19 10:58:17.298827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.601 [2024-11-19 10:58:17.298858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.299034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.299065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.299256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.299290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.299409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.299441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.299679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.299712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 
00:30:27.602 [2024-11-19 10:58:17.299894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.299926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.300048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.300081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.300301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.300334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.300521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.300552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.300744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.300776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 
00:30:27.602 [2024-11-19 10:58:17.300889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.300920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.301040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.301072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.301262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.301295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.301486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.301518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.301768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.301800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 
00:30:27.602 [2024-11-19 10:58:17.301980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.302012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.302276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.302310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.302525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.302557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.302760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.302792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.303059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.303091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 
00:30:27.602 [2024-11-19 10:58:17.303223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.303255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.303445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.303477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.303715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.303747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.303856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.303887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.304077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.304109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 
00:30:27.602 [2024-11-19 10:58:17.304294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.304327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.304507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.304539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.304719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.304750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.304931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.304963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.305152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.305184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 
00:30:27.602 [2024-11-19 10:58:17.305405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.305436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.305704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.305747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.306000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.306031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.306152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.306184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.306372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.306403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 
00:30:27.602 [2024-11-19 10:58:17.306581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.306612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.306882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.306914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.307047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.307077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.307352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.307385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.307528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.307560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 
00:30:27.602 [2024-11-19 10:58:17.307684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.307715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.307955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.307987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.308170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.308208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.308385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.308417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.602 qpair failed and we were unable to recover it. 00:30:27.602 [2024-11-19 10:58:17.308685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.602 [2024-11-19 10:58:17.308716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.603 qpair failed and we were unable to recover it. 
00:30:27.603 [2024-11-19 10:58:17.308927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.603 [2024-11-19 10:58:17.308959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.603 qpair failed and we were unable to recover it. 00:30:27.603 [2024-11-19 10:58:17.309228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.603 [2024-11-19 10:58:17.309261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.603 qpair failed and we were unable to recover it. 00:30:27.603 [2024-11-19 10:58:17.309382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.603 [2024-11-19 10:58:17.309413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.603 qpair failed and we were unable to recover it. 00:30:27.603 [2024-11-19 10:58:17.309587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.603 [2024-11-19 10:58:17.309619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.603 qpair failed and we were unable to recover it. 00:30:27.603 [2024-11-19 10:58:17.309818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.603 [2024-11-19 10:58:17.309849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.603 qpair failed and we were unable to recover it. 
00:30:27.603 [2024-11-19 10:58:17.310114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.603 [2024-11-19 10:58:17.310146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.603 qpair failed and we were unable to recover it. 00:30:27.603 [2024-11-19 10:58:17.310402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.603 [2024-11-19 10:58:17.310435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.603 qpair failed and we were unable to recover it. 00:30:27.603 [2024-11-19 10:58:17.310688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.603 [2024-11-19 10:58:17.310719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.603 qpair failed and we were unable to recover it. 00:30:27.603 [2024-11-19 10:58:17.310919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.603 [2024-11-19 10:58:17.310950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.603 qpair failed and we were unable to recover it. 00:30:27.603 [2024-11-19 10:58:17.311133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.603 [2024-11-19 10:58:17.311164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.603 qpair failed and we were unable to recover it. 
00:30:27.603 [2024-11-19 10:58:17.311367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.311399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.311651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.311682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.311816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.311847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.311975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.312007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.312194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.312235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.312490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.312522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.312654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.312685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.312872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.312903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.313144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.313175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.313376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.313407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.313627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.313659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.313917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.313949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.314138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.314169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.314389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.314422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.314619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.314651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.314914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.314945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.315067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.315105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.315319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.315352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.315602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.315633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.315892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.315924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.316044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.316075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.316333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.316366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.316499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.316530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.316637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.316669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.316924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.316956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.317221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.317252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.317407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.317443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.317734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.317784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.603 [2024-11-19 10:58:17.318109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.603 [2024-11-19 10:58:17.318181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.603 qpair failed and we were unable to recover it.
00:30:27.888 [2024-11-19 10:58:17.318401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.888 [2024-11-19 10:58:17.318438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.888 qpair failed and we were unable to recover it.
00:30:27.888 [2024-11-19 10:58:17.318647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.888 [2024-11-19 10:58:17.318680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.888 qpair failed and we were unable to recover it.
00:30:27.888 [2024-11-19 10:58:17.318876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.318909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.319099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.319131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.319307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.319341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.319587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.319619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.319806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.319838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.320081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.320113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.320369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.320402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.320663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.320696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.320877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.320908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.321104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.321136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.321376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.321409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.321602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.321634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.321818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.321861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.321981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.322013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.322150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.322181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.322439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.322472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.322589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.322620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.322806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.322838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.323025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.323057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.323192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.323237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.323411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.323442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.323692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.323725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.323914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.323946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.324132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.324164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.324437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.324470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.324600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.324632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.324925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.324957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.325163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.325195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.325457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.325488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.889 qpair failed and we were unable to recover it.
00:30:27.889 [2024-11-19 10:58:17.325678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.889 [2024-11-19 10:58:17.325711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.325952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.325983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.326160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.326192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.326471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.326503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.326634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.326666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.326786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.326818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.327006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.327038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.327217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.327250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.327436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.327468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.327696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.327727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.327910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.327947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.328192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.328235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.328505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.328538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.328747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.328779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.329015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.329047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.329309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.329343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.329546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.329578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.329760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.329793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.329921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.329952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.330126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.330158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.330436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.330469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.330655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.330687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.330948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.330980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.331105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.331136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.331338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.331371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.331556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.331587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.331773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.331805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.332045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.332076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.332257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.332290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.890 [2024-11-19 10:58:17.332427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.890 [2024-11-19 10:58:17.332458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.890 qpair failed and we were unable to recover it.
00:30:27.891 [2024-11-19 10:58:17.332702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.891 [2024-11-19 10:58:17.332734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.891 qpair failed and we were unable to recover it.
00:30:27.891 [2024-11-19 10:58:17.332912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.891 [2024-11-19 10:58:17.332944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.891 qpair failed and we were unable to recover it.
00:30:27.891 [2024-11-19 10:58:17.333112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.891 [2024-11-19 10:58:17.333145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.891 qpair failed and we were unable to recover it.
00:30:27.891 [2024-11-19 10:58:17.333420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.891 [2024-11-19 10:58:17.333452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.891 qpair failed and we were unable to recover it.
00:30:27.891 [2024-11-19 10:58:17.333593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.891 [2024-11-19 10:58:17.333625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.891 qpair failed and we were unable to recover it.
00:30:27.891 [2024-11-19 10:58:17.333880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.333912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 00:30:27.891 [2024-11-19 10:58:17.334082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.334114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 00:30:27.891 [2024-11-19 10:58:17.334381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.334415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 00:30:27.891 [2024-11-19 10:58:17.334618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.334650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 00:30:27.891 [2024-11-19 10:58:17.334916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.334947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 
00:30:27.891 [2024-11-19 10:58:17.335078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.335110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 00:30:27.891 [2024-11-19 10:58:17.335239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.335271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 00:30:27.891 [2024-11-19 10:58:17.335453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.335485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 00:30:27.891 [2024-11-19 10:58:17.335601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.335632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 00:30:27.891 [2024-11-19 10:58:17.335751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.335783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 
00:30:27.891 [2024-11-19 10:58:17.335900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.335931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 00:30:27.891 [2024-11-19 10:58:17.336138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.336170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 00:30:27.891 [2024-11-19 10:58:17.336286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.336319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 00:30:27.891 [2024-11-19 10:58:17.336453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.336484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 00:30:27.891 [2024-11-19 10:58:17.336717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.336750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 
00:30:27.891 [2024-11-19 10:58:17.336857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.336888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 00:30:27.891 [2024-11-19 10:58:17.337030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.337077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 00:30:27.891 [2024-11-19 10:58:17.337276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.337310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 00:30:27.891 [2024-11-19 10:58:17.337528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.337559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 00:30:27.891 [2024-11-19 10:58:17.337699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.337731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 
00:30:27.891 [2024-11-19 10:58:17.337922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.337953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.891 qpair failed and we were unable to recover it. 00:30:27.891 [2024-11-19 10:58:17.338124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.891 [2024-11-19 10:58:17.338155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.338367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.338401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.338642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.338674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.338860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.338891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 
00:30:27.892 [2024-11-19 10:58:17.339061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.339093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.339231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.339264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.339449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.339480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.339661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.339693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.339884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.339924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 
00:30:27.892 [2024-11-19 10:58:17.340121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.340153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.340428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.340460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.340645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.340678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.340868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.340899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.341079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.341111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 
00:30:27.892 [2024-11-19 10:58:17.341300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.341333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.341519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.341550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.341732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.341765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.341949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.341980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.342231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.342264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 
00:30:27.892 [2024-11-19 10:58:17.342451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.342482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.342669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.342701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.342813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.342844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.343044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.343076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.343265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.343298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 
00:30:27.892 [2024-11-19 10:58:17.343433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.343465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.343686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.343718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.343848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.343879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.344061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.344093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 00:30:27.892 [2024-11-19 10:58:17.344333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.892 [2024-11-19 10:58:17.344366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.892 qpair failed and we were unable to recover it. 
00:30:27.892 [2024-11-19 10:58:17.344551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.344582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.344764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.344796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.345007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.345039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.345169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.345213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.345402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.345433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 
00:30:27.893 [2024-11-19 10:58:17.345556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.345589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.345808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.345881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.346092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.346127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.346257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.346293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.346555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.346588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 
00:30:27.893 [2024-11-19 10:58:17.346858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.346889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.347152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.347185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.347337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.347369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.347552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.347584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.347717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.347749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 
00:30:27.893 [2024-11-19 10:58:17.347935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.347968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.348174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.348213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.348476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.348509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.348636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.348668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.348790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.348822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 
00:30:27.893 [2024-11-19 10:58:17.349091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.349122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.349241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.349275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.349512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.349543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.349727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.349759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.349879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.349910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 
00:30:27.893 [2024-11-19 10:58:17.350027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.350058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.350329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.350363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.350555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.893 [2024-11-19 10:58:17.350587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.893 qpair failed and we were unable to recover it. 00:30:27.893 [2024-11-19 10:58:17.350760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.350792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.351079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.351111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 
00:30:27.894 [2024-11-19 10:58:17.351246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.351279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.351538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.351569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.351761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.351793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.352004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.352041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.352288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.352321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 
00:30:27.894 [2024-11-19 10:58:17.352535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.352567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.352747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.352779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.353024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.353056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.353235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.353268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.353485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.353516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 
00:30:27.894 [2024-11-19 10:58:17.353693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.353725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.353920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.353951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.354143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.354174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.354311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.354343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.354469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.354501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 
00:30:27.894 [2024-11-19 10:58:17.354689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.354721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.354981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.355013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.355198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.355242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.355410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.355443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.355635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.355666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 
00:30:27.894 [2024-11-19 10:58:17.355851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.355883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.356053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.356084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.356272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.356305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.356515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.356546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.356726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.356758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 
00:30:27.894 [2024-11-19 10:58:17.357008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.357039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.357235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.357268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.357510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.357541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.357797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.894 [2024-11-19 10:58:17.357829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.894 qpair failed and we were unable to recover it. 00:30:27.894 [2024-11-19 10:58:17.358089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.358120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 
00:30:27.895 [2024-11-19 10:58:17.358303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.358342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.358447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.358479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.358723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.358754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.359038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.359070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.359238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.359271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 
00:30:27.895 [2024-11-19 10:58:17.359538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.359569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.359677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.359709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.359905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.359936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.360133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.360165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.360411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.360443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 
00:30:27.895 [2024-11-19 10:58:17.360629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.360660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.360838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.360869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.361002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.361033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.361275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.361308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.361495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.361527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 
00:30:27.895 [2024-11-19 10:58:17.361731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.361762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.361998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.362030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.362228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.362261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.362460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.362492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.362735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.362767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 
00:30:27.895 [2024-11-19 10:58:17.362883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.362915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.363187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.363232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.363423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.363455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.363647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.363678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.363823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.363855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 
00:30:27.895 [2024-11-19 10:58:17.364036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.364067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.364272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.895 [2024-11-19 10:58:17.364305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.895 qpair failed and we were unable to recover it. 00:30:27.895 [2024-11-19 10:58:17.364513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.364544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.364735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.364768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.365023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.365053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 
00:30:27.896 [2024-11-19 10:58:17.365160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.365192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.365378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.365410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.365647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.365678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.365854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.365887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.365993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.366024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 
00:30:27.896 [2024-11-19 10:58:17.366249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.366283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.366389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.366421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.366597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.366629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.366742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.366774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.366914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.366946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 
00:30:27.896 [2024-11-19 10:58:17.367135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.367166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.367313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.367347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.367532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.367565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.367683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.367715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.367891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.367922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 
00:30:27.896 [2024-11-19 10:58:17.368039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.368072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.368251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.368284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.368457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.368488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.368684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.368715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.368893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.368924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 
00:30:27.896 [2024-11-19 10:58:17.369109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.369142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.369285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.369318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.369449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.369482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.369609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.369640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.369830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.369862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 
00:30:27.896 [2024-11-19 10:58:17.370002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.370034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.896 qpair failed and we were unable to recover it. 00:30:27.896 [2024-11-19 10:58:17.370244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.896 [2024-11-19 10:58:17.370278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-11-19 10:58:17.370449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.370481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-11-19 10:58:17.370723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.370756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-11-19 10:58:17.370941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.370973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 
00:30:27.897 [2024-11-19 10:58:17.371176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.371216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-11-19 10:58:17.371459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.371490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-11-19 10:58:17.371638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.371670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-11-19 10:58:17.371778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.371809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-11-19 10:58:17.371999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.372032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 
00:30:27.897 [2024-11-19 10:58:17.372225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.372259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-11-19 10:58:17.372465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.372496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-11-19 10:58:17.372677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.372710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-11-19 10:58:17.372909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.372947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-11-19 10:58:17.373137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.373169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 
00:30:27.897 [2024-11-19 10:58:17.373297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.373330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-11-19 10:58:17.373512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.373545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-11-19 10:58:17.373668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.373699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-11-19 10:58:17.373833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.373865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-11-19 10:58:17.374109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.897 [2024-11-19 10:58:17.374143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.897 qpair failed and we were unable to recover it. 
00:30:27.899 [2024-11-19 10:58:17.386939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.899 [2024-11-19 10:58:17.387010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.899 qpair failed and we were unable to recover it.
00:30:27.901 [2024-11-19 10:58:17.397791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.397822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.398010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.398041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.398308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.398341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.398528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.398557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.398753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.398791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 
00:30:27.901 [2024-11-19 10:58:17.398926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.398957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.399147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.399178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.399393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.399425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.399543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.399574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.399695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.399727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 
00:30:27.901 [2024-11-19 10:58:17.399907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.399939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.400181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.400226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.400412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.400445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.400639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.400670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.400956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.400987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 
00:30:27.901 [2024-11-19 10:58:17.401185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.401233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.401429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.401460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.401590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.401621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.401779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.401810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.402007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.402039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 
00:30:27.901 [2024-11-19 10:58:17.402152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.402183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.402375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.402407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.402588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.402619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.901 qpair failed and we were unable to recover it. 00:30:27.901 [2024-11-19 10:58:17.402814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.901 [2024-11-19 10:58:17.402845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.402972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.403003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 
00:30:27.902 [2024-11-19 10:58:17.403118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.403149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.403398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.403434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.403555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.403585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.403794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.403825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.404018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.404052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 
00:30:27.902 [2024-11-19 10:58:17.404266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.404302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.404436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.404469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.404588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.404619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.404752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.404783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.404891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.404922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 
00:30:27.902 [2024-11-19 10:58:17.405106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.405138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.405389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.405421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.405607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.405639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.405780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.405811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.405934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.405965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 
00:30:27.902 [2024-11-19 10:58:17.406137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.406168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.406352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.406384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.406590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.406621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.406732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.406764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.406934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.406971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 
00:30:27.902 [2024-11-19 10:58:17.407164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.407196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.407328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.407359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.407531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.407563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.407676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.407707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.407950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.407982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 
00:30:27.902 [2024-11-19 10:58:17.408178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.408221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.408410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.408441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.408581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.408613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.408740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.408771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.408890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.408922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 
00:30:27.902 [2024-11-19 10:58:17.409099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.409130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.409251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.409285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.409522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.409555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.409690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.409721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.409871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.409903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 
00:30:27.902 [2024-11-19 10:58:17.410023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.902 [2024-11-19 10:58:17.410053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.902 qpair failed and we were unable to recover it. 00:30:27.902 [2024-11-19 10:58:17.410358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.410394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.410568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.410599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.410722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.410754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.410887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.410918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 
00:30:27.903 [2024-11-19 10:58:17.411114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.411146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.411297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.411330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.411591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.411623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.411797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.411828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.412068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.412101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 
00:30:27.903 [2024-11-19 10:58:17.412229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.412263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.412394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.412426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.412530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.412561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.412775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.412806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.412994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.413025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 
00:30:27.903 [2024-11-19 10:58:17.413226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.413272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.413485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.413517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.413755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.413790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.413982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.414013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.414210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.414242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 
00:30:27.903 [2024-11-19 10:58:17.414426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.414457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.414574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.414607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.414723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.414753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.414938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.414970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.415101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.415138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 
00:30:27.903 [2024-11-19 10:58:17.415257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.415291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.415408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.415440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.415611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.415643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.415836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.415867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.416132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.416163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 
00:30:27.903 [2024-11-19 10:58:17.416363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.416397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.416523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.416554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.416738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.416769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.417008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.417040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 00:30:27.903 [2024-11-19 10:58:17.417290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.903 [2024-11-19 10:58:17.417323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.903 qpair failed and we were unable to recover it. 
00:30:27.904 [2024-11-19 10:58:17.417449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.417480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.417669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.417701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.417988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.418019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.418210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.418243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.418375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.418406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 
00:30:27.904 [2024-11-19 10:58:17.418614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.418644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.418852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.418884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.419069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.419100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.419214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.419247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.419364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.419395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 
00:30:27.904 [2024-11-19 10:58:17.419573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.419605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.419780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.419814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.420010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.420042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.420168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.420198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.420479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.420512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 
00:30:27.904 [2024-11-19 10:58:17.420683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.420714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.420926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.420960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.421142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.421173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.421312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.421366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.421504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.421535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 
00:30:27.904 [2024-11-19 10:58:17.421650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.421681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.421880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.421912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.422036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.422066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.422187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.422243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.422366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.422397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 
00:30:27.904 [2024-11-19 10:58:17.422509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.422540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.422659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.422690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.422901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.422932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.904 [2024-11-19 10:58:17.423112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.904 [2024-11-19 10:58:17.423143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.904 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.423337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.423376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 
00:30:27.905 [2024-11-19 10:58:17.423599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.423630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.423760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.423794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.423991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.424022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.424222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.424255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.424476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.424508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 
00:30:27.905 [2024-11-19 10:58:17.424627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.424660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.424863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.424894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.425061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.425093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.425222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.425255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.425375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.425406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 
00:30:27.905 [2024-11-19 10:58:17.425601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.425633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.425814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.425846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.426019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.426049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.426236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.426269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.426377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.426409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 
00:30:27.905 [2024-11-19 10:58:17.426580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.426611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.426822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.426854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.426995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.427026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.427148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.427180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.427310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.427341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 
00:30:27.905 [2024-11-19 10:58:17.427516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.427548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.427663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.427694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.427823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.427854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.905 [2024-11-19 10:58:17.427980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.905 [2024-11-19 10:58:17.428011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.905 qpair failed and we were unable to recover it. 00:30:27.906 [2024-11-19 10:58:17.428195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.428235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 
00:30:27.906 [2024-11-19 10:58:17.428430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.428463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 00:30:27.906 [2024-11-19 10:58:17.428688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.428760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 00:30:27.906 [2024-11-19 10:58:17.428921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.428956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 00:30:27.906 [2024-11-19 10:58:17.429093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.429127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 00:30:27.906 [2024-11-19 10:58:17.429306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.429344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 
00:30:27.906 [2024-11-19 10:58:17.429539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.429571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 00:30:27.906 [2024-11-19 10:58:17.429690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.429721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 00:30:27.906 [2024-11-19 10:58:17.429891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.429926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 00:30:27.906 [2024-11-19 10:58:17.430106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.430138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 00:30:27.906 [2024-11-19 10:58:17.430355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.430388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 
00:30:27.906 [2024-11-19 10:58:17.430500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.430532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 00:30:27.906 [2024-11-19 10:58:17.430707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.430741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 00:30:27.906 [2024-11-19 10:58:17.430882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.430915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 00:30:27.906 [2024-11-19 10:58:17.431035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.431067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 00:30:27.906 [2024-11-19 10:58:17.431264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.431299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 
00:30:27.906 [2024-11-19 10:58:17.431448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.431482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 00:30:27.906 [2024-11-19 10:58:17.431678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.431710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 00:30:27.906 [2024-11-19 10:58:17.431825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.431856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 00:30:27.906 [2024-11-19 10:58:17.432023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.432056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 00:30:27.906 [2024-11-19 10:58:17.432233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.906 [2024-11-19 10:58:17.432266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.906 qpair failed and we were unable to recover it. 
00:30:27.906 [2024-11-19 10:58:17.432505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.906 [2024-11-19 10:58:17.432537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.906 qpair failed and we were unable to recover it.
00:30:27.906 [2024-11-19 10:58:17.432713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.906 [2024-11-19 10:58:17.432745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.906 qpair failed and we were unable to recover it.
00:30:27.906 [2024-11-19 10:58:17.432933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.906 [2024-11-19 10:58:17.432964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.906 qpair failed and we were unable to recover it.
00:30:27.906 [2024-11-19 10:58:17.433074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.906 [2024-11-19 10:58:17.433107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.906 qpair failed and we were unable to recover it.
00:30:27.906 [2024-11-19 10:58:17.433350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.906 [2024-11-19 10:58:17.433385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.906 qpair failed and we were unable to recover it.
00:30:27.906 [2024-11-19 10:58:17.433521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.906 [2024-11-19 10:58:17.433553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.906 qpair failed and we were unable to recover it.
00:30:27.906 [2024-11-19 10:58:17.433741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.906 [2024-11-19 10:58:17.433772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.906 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.433956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.433988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.434180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.434233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.434407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.434439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.434612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.434644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.434817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.434848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.435024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.435061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.435323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.435357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.435484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.435515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.435639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.435670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.435862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.435894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.436018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.436050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.436165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.436197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.436328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.436360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.436481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.436512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.436628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.436660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.436919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.436953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.437085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.437116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.437312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.437346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.437523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.437555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.437714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.437786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.437985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.438020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.438221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.438260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.438436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.438468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.438744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.438778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.439024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.439056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.439300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.439334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.439524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.439556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.907 qpair failed and we were unable to recover it.
00:30:27.907 [2024-11-19 10:58:17.439813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.907 [2024-11-19 10:58:17.439845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.439952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.439993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.440181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.440223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.440409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.440442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.440557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.440588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.440701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.440734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.440922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.440954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.441091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.441123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.441255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.441288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.441467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.441498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.441752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.441784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.441892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.441923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.442028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.442059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.442187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.442243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.442370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.442402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.442612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.442644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.442778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.442809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.442947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.442979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.443102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.443133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.443371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.443405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.443612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.443644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.443827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.443859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.444046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.444078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.444196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.444238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.444426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.444457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.444654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.444687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.444928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.444959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.908 [2024-11-19 10:58:17.445148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.908 [2024-11-19 10:58:17.445180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.908 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.445304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.445336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.445544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.445576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.445791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.445823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.445999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.446031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.446221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.446257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.446365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.446395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.446564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.446594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.446784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.446817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.446926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.446958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.447215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.447249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.447423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.447453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.447700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.447733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.447927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.447958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.448088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.448125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.448227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.448261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.448370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.448401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.448591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.448623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.448796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.448827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.449022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.449055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.449154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.449186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.449379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.449411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.449596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.449628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.449816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.449848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.449965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.449997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.450267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.450300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.450478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.450509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.450622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.450655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.450843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.450875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.451065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.451098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.451285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.451318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.909 [2024-11-19 10:58:17.451594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.909 [2024-11-19 10:58:17.451626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.909 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.451752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.451783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.451891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.451922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.452028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.452060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.452274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.452309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.452446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.452477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.452596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.452628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.452744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.452775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.452948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.452980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.453161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.453193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.453468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.453501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.453618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.453649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.453791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.453822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.454065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.454096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.454219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.454252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.454516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.454547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.454735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.454768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.454912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.454942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.455149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.455181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.455377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.455410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.455649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.455681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.455802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.455833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.456009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.910 [2024-11-19 10:58:17.456040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.910 qpair failed and we were unable to recover it.
00:30:27.910 [2024-11-19 10:58:17.456227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.910 [2024-11-19 10:58:17.456265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.910 qpair failed and we were unable to recover it. 00:30:27.910 [2024-11-19 10:58:17.456454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.910 [2024-11-19 10:58:17.456486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.910 qpair failed and we were unable to recover it. 00:30:27.910 [2024-11-19 10:58:17.456603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.910 [2024-11-19 10:58:17.456634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.910 qpair failed and we were unable to recover it. 00:30:27.910 [2024-11-19 10:58:17.456830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.910 [2024-11-19 10:58:17.456862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.910 qpair failed and we were unable to recover it. 00:30:27.910 [2024-11-19 10:58:17.456967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.910 [2024-11-19 10:58:17.456998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.910 qpair failed and we were unable to recover it. 
00:30:27.910 [2024-11-19 10:58:17.457173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.910 [2024-11-19 10:58:17.457231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.910 qpair failed and we were unable to recover it. 00:30:27.910 [2024-11-19 10:58:17.457357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.457388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.457557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.457590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.457697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.457727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.457901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.457932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 
00:30:27.911 [2024-11-19 10:58:17.458189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.458232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.458418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.458450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.458584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.458615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.458787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.458819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.458946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.458977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 
00:30:27.911 [2024-11-19 10:58:17.459100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.459132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.459320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.459354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.459570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.459602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.459773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.459804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.460010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.460041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 
00:30:27.911 [2024-11-19 10:58:17.460223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.460256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.460477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.460509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.460761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.460794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.460899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.460929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.461099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.461130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 
00:30:27.911 [2024-11-19 10:58:17.461380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.461415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.461614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.461645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.461851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.461884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.462122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.462154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.462370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.462403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 
00:30:27.911 [2024-11-19 10:58:17.462591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.462623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.462800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.462832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.463080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.911 [2024-11-19 10:58:17.463111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.911 qpair failed and we were unable to recover it. 00:30:27.911 [2024-11-19 10:58:17.463230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.463263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.463471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.463503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 
00:30:27.912 [2024-11-19 10:58:17.463714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.463746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.463916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.463947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.464062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.464095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.464211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.464243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.464359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.464391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 
00:30:27.912 [2024-11-19 10:58:17.464571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.464608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.464739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.464771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.465012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.465044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.465147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.465178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.465377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.465409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 
00:30:27.912 [2024-11-19 10:58:17.465594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.465626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.465809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.465841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.466036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.466068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.466174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.466215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.466479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.466512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 
00:30:27.912 [2024-11-19 10:58:17.466628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.466659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.466860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.466891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.467058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.467088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.467269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.467303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.467599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.467630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 
00:30:27.912 [2024-11-19 10:58:17.467841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.467872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.467989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.468020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.468139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.912 [2024-11-19 10:58:17.468171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.912 qpair failed and we were unable to recover it. 00:30:27.912 [2024-11-19 10:58:17.468448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.913 [2024-11-19 10:58:17.468479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.913 qpair failed and we were unable to recover it. 00:30:27.913 [2024-11-19 10:58:17.468682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.913 [2024-11-19 10:58:17.468714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.913 qpair failed and we were unable to recover it. 
00:30:27.913 [2024-11-19 10:58:17.468907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.913 [2024-11-19 10:58:17.468938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.913 qpair failed and we were unable to recover it. 00:30:27.913 [2024-11-19 10:58:17.469144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.913 [2024-11-19 10:58:17.469175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.913 qpair failed and we were unable to recover it. 00:30:27.913 [2024-11-19 10:58:17.469387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.913 [2024-11-19 10:58:17.469419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.913 qpair failed and we were unable to recover it. 00:30:27.913 [2024-11-19 10:58:17.469601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.913 [2024-11-19 10:58:17.469634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.913 qpair failed and we were unable to recover it. 00:30:27.913 [2024-11-19 10:58:17.469903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.913 [2024-11-19 10:58:17.469934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.913 qpair failed and we were unable to recover it. 
00:30:27.913 [2024-11-19 10:58:17.470064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.913 [2024-11-19 10:58:17.470096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.913 qpair failed and we were unable to recover it. 00:30:27.913 [2024-11-19 10:58:17.470280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.913 [2024-11-19 10:58:17.470313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.913 qpair failed and we were unable to recover it. 00:30:27.913 [2024-11-19 10:58:17.470440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.913 [2024-11-19 10:58:17.470471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.913 qpair failed and we were unable to recover it. 00:30:27.913 [2024-11-19 10:58:17.470588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.913 [2024-11-19 10:58:17.470620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.913 qpair failed and we were unable to recover it. 00:30:27.913 [2024-11-19 10:58:17.470809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.913 [2024-11-19 10:58:17.470841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.913 qpair failed and we were unable to recover it. 
00:30:27.913 [2024-11-19 10:58:17.470969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.913 [2024-11-19 10:58:17.471000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.913 qpair failed and we were unable to recover it. 00:30:27.913 [2024-11-19 10:58:17.471108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.913 [2024-11-19 10:58:17.471139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.913 qpair failed and we were unable to recover it. 00:30:27.913 [2024-11-19 10:58:17.471377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.913 [2024-11-19 10:58:17.471411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.913 qpair failed and we were unable to recover it. 00:30:27.913 [2024-11-19 10:58:17.471604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.913 [2024-11-19 10:58:17.471635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.913 qpair failed and we were unable to recover it. 00:30:27.913 [2024-11-19 10:58:17.471852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.913 [2024-11-19 10:58:17.471884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.913 qpair failed and we were unable to recover it. 
00:30:27.913 [2024-11-19 10:58:17.472077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.913 [2024-11-19 10:58:17.472109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.913 qpair failed and we were unable to recover it.
00:30:27.913 [... the three lines above repeat for further connect() attempts against tqpair=0x7f6b40000b90, addr=10.0.0.2, port=4420, spanning timestamps 10:58:17.472285 through 10:58:17.495365; every attempt fails with errno = 111 (ECONNREFUSED) and the qpair cannot be recovered ...]
00:30:27.916 [2024-11-19 10:58:17.495621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.916 [2024-11-19 10:58:17.495693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.916 qpair failed and we were unable to recover it.
00:30:27.916 [... the three lines above repeat for two further connect() attempts against tqpair=0x239cba0 at 10:58:17.495851 and 10:58:17.495997, each failing with errno = 111 and an unrecoverable qpair ...]
00:30:27.917 [2024-11-19 10:58:17.496247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.496283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.496396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.496428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.496618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.496649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.496774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.496806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.496928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.496958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 
00:30:27.917 [2024-11-19 10:58:17.497214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.497248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.497365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.497397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.497517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.497549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.497789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.497820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.497996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.498029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 
00:30:27.917 [2024-11-19 10:58:17.498155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.498187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.498455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.498488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.498664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.498695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.498885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.498916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.499119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.499151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 
00:30:27.917 [2024-11-19 10:58:17.499282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.499315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.499514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.499546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.499669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.499701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.499828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.499860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.499979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.500011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 
00:30:27.917 [2024-11-19 10:58:17.500191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.500237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.500475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.500507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.500716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.500747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.500921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.500953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.501155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.501193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 
00:30:27.917 [2024-11-19 10:58:17.501338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.501370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.501501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.501534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.501644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.501675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.501796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.501827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.502028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.502060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 
00:30:27.917 [2024-11-19 10:58:17.502256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.502289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.502412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.502445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.502624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.502656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.502847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.502878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.503009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.503040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 
00:30:27.917 [2024-11-19 10:58:17.503154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.503185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.503387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.503419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.503524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.503555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.503741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.503774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.917 [2024-11-19 10:58:17.504036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.504067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 
00:30:27.917 [2024-11-19 10:58:17.504334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.917 [2024-11-19 10:58:17.504368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.917 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.504552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.504584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.504777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.504809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.504998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.505029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.505131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.505162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 
00:30:27.918 [2024-11-19 10:58:17.505355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.505388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.505575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.505607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.505801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.505833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.506077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.506109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.506312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.506347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 
00:30:27.918 [2024-11-19 10:58:17.506467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.506498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.506680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.506717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.506837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.506868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.506998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.507030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.507143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.507174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 
00:30:27.918 [2024-11-19 10:58:17.507373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.507405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.507537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.507568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.507782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.507814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.508000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.508032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.508223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.508256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 
00:30:27.918 [2024-11-19 10:58:17.508371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.508402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.508604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.508636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.508912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.508944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.509048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.509080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.509270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.509304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 
00:30:27.918 [2024-11-19 10:58:17.509488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.509519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.509633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.509665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.509839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.509870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.510041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.510073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.510217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.510249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 
00:30:27.918 [2024-11-19 10:58:17.510373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.510404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.510547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.510579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.510769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.510803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.510988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.511020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.511259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.511292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 
00:30:27.918 [2024-11-19 10:58:17.511504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.511535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.511679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.511711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.511950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.511980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.512115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.512153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.512333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.512365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 
00:30:27.918 [2024-11-19 10:58:17.512531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.512564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.512744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.512776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.512957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.512989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.513157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.513188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.513367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.513400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 
00:30:27.918 [2024-11-19 10:58:17.513576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.513608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.513714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.513746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.513867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.513898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.918 qpair failed and we were unable to recover it. 00:30:27.918 [2024-11-19 10:58:17.514136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.918 [2024-11-19 10:58:17.514169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.514308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.514341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 
00:30:27.919 [2024-11-19 10:58:17.514469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.514502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.514616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.514647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.514904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.514974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.515109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.515142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.515351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.515385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 
00:30:27.919 [2024-11-19 10:58:17.515496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.515527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.515768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.515799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.515923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.515955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.516213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.516246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.516379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.516410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 
00:30:27.919 [2024-11-19 10:58:17.516528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.516559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.516690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.516722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.516931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.516962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.517173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.517212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.517330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.517362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 
00:30:27.919 [2024-11-19 10:58:17.517558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.517605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.517728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.517759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.517948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.517981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.518167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.518197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.518324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.518356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 
00:30:27.919 [2024-11-19 10:58:17.518460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.518492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.518613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.518643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.518757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.518788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.518908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.518939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.519058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.519089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 
00:30:27.919 [2024-11-19 10:58:17.519218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.519250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.519373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.519404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.519605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.519636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.519743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.519774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.519894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.519924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 
00:30:27.919 [2024-11-19 10:58:17.520042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.520073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.520183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.520226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.520403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.520434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.520539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.520571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.520685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.520716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 
00:30:27.919 [2024-11-19 10:58:17.520824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.520854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.521022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.521053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.521225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.521257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.521430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.521461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.521593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.521624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 
00:30:27.919 [2024-11-19 10:58:17.521800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.521832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.522023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.522053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.522163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.522198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.522411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.522444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.919 [2024-11-19 10:58:17.522554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.522585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 
00:30:27.919 [2024-11-19 10:58:17.522705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.919 [2024-11-19 10:58:17.522737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.919 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.522860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.522891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.523022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.523053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.523255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.523289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.523396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.523427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 
00:30:27.920 [2024-11-19 10:58:17.523618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.523650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.523845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.523877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.524071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.524103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.524224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.524256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.524521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.524552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 
00:30:27.920 [2024-11-19 10:58:17.524669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.524702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.524829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.524861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.525064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.525096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.525276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.525308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.525443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.525475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 
00:30:27.920 [2024-11-19 10:58:17.525659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.525690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.525823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.525854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.525972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.526003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.526123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.526155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.526298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.526330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 
00:30:27.920 [2024-11-19 10:58:17.526540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.526572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.526747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.526778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.526891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.526922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.527031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.527062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.527238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.527278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 
00:30:27.920 [2024-11-19 10:58:17.527462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.527493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.527612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.527644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.527760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.527792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.527963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.527994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.528181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.528220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 
00:30:27.920 [2024-11-19 10:58:17.528330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.528362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.528533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.528563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.528735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.528767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.528881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.528912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 00:30:27.920 [2024-11-19 10:58:17.529033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.920 [2024-11-19 10:58:17.529066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.920 qpair failed and we were unable to recover it. 
00:30:27.920 [2024-11-19 10:58:17.529182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.920 [2024-11-19 10:58:17.529223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.920 qpair failed and we were unable to recover it.
00:30:27.920 [2024-11-19 10:58:17.529348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.920 [2024-11-19 10:58:17.529380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.920 qpair failed and we were unable to recover it.
00:30:27.920 [2024-11-19 10:58:17.529555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.920 [2024-11-19 10:58:17.529586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.920 qpair failed and we were unable to recover it.
00:30:27.920 [2024-11-19 10:58:17.529703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.920 [2024-11-19 10:58:17.529735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.920 qpair failed and we were unable to recover it.
00:30:27.920 [2024-11-19 10:58:17.529841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.920 [2024-11-19 10:58:17.529872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.920 qpair failed and we were unable to recover it.
00:30:27.920 [2024-11-19 10:58:17.530061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.920 [2024-11-19 10:58:17.530093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.920 qpair failed and we were unable to recover it.
00:30:27.920 [2024-11-19 10:58:17.530192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.920 [2024-11-19 10:58:17.530253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.920 qpair failed and we were unable to recover it.
00:30:27.920 [2024-11-19 10:58:17.530378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.920 [2024-11-19 10:58:17.530410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.920 qpair failed and we were unable to recover it.
00:30:27.920 [2024-11-19 10:58:17.530533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.920 [2024-11-19 10:58:17.530564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.920 qpair failed and we were unable to recover it.
00:30:27.920 [2024-11-19 10:58:17.530745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.920 [2024-11-19 10:58:17.530777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.920 qpair failed and we were unable to recover it.
00:30:27.920 [2024-11-19 10:58:17.530951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.920 [2024-11-19 10:58:17.530982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.920 qpair failed and we were unable to recover it.
00:30:27.920 [2024-11-19 10:58:17.531164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.920 [2024-11-19 10:58:17.531196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.920 qpair failed and we were unable to recover it.
00:30:27.920 [2024-11-19 10:58:17.531324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.920 [2024-11-19 10:58:17.531356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.531458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.531489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.531662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.531692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.531804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.531836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.532021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.532058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.532194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.532236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.532476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.532507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.532622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.532654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.532781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.532812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.532912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.532944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.533052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.533084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.533257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.533290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.533412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.533444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.533557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.533590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.533720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.533750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.533866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.533898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.534072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.534103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.534241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.534275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.534389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.534421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.534528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.534560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.534664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.534695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.534868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.534900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.535072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.535103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.535275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.535308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.535409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.535441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.535572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.535604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.535787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.535818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.535943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.535976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.536219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.536252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.536449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.536482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.536612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.536643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.536777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.536815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.536927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.536957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.537094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.537125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.537324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.537357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.537541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.537572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.537681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.537713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.537839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.537870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.538041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.538073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.538188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.538227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.538349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.538381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.538495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.538526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.538638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.921 [2024-11-19 10:58:17.538670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.921 qpair failed and we were unable to recover it.
00:30:27.921 [2024-11-19 10:58:17.538803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.538834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.538950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.538982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.539164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.539195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.539388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.539421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.539546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.539577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.539770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.539802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.539920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.539952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.540054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.540086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.540209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.540241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.540413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.540445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.540563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.540594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.540710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.540742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.540847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.540878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.540999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.541032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.541137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.541169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.541370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.541403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.541525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.541557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.541737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.541769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.541872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.541903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.542124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.542156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.542311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.542343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.542516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.542548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.542672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.542703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.542805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.542837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.542957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.542988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.543101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.543133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.543250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.543284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.543466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.543498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.543677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.543709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.543886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.543918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.544109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.544141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.544264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.544296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.544414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.544446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.544564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.544595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.544710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.544741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.544930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.544961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.545074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.545106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.545403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.545437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.545561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.545592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.545784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.545816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.545993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.546025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.546193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.546245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.546375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.546407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.546600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.546631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.546750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.546782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.546956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.546988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.547162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.547193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.547330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.922 [2024-11-19 10:58:17.547362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.922 qpair failed and we were unable to recover it.
00:30:27.922 [2024-11-19 10:58:17.547494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.923 [2024-11-19 10:58:17.547525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.923 qpair failed and we were unable to recover it.
00:30:27.923 [2024-11-19 10:58:17.547701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.923 [2024-11-19 10:58:17.547733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.923 qpair failed and we were unable to recover it.
00:30:27.923 [2024-11-19 10:58:17.547907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.923 [2024-11-19 10:58:17.547939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.923 qpair failed and we were unable to recover it.
00:30:27.923 [2024-11-19 10:58:17.548112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.923 [2024-11-19 10:58:17.548144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.923 qpair failed and we were unable to recover it.
00:30:27.923 [2024-11-19 10:58:17.548262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.923 [2024-11-19 10:58:17.548294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.923 qpair failed and we were unable to recover it.
00:30:27.923 [2024-11-19 10:58:17.548532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.923 [2024-11-19 10:58:17.548564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.923 qpair failed and we were unable to recover it.
00:30:27.923 [2024-11-19 10:58:17.548682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.923 [2024-11-19 10:58:17.548714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.923 qpair failed and we were unable to recover it.
00:30:27.923 [2024-11-19 10:58:17.548841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.923 [2024-11-19 10:58:17.548872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.923 qpair failed and we were unable to recover it.
00:30:27.923 [2024-11-19 10:58:17.548981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.923 [2024-11-19 10:58:17.549019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.923 qpair failed and we were unable to recover it.
00:30:27.923 [2024-11-19 10:58:17.549199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.923 [2024-11-19 10:58:17.549243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.923 qpair failed and we were unable to recover it.
00:30:27.923 [2024-11-19 10:58:17.549359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.923 [2024-11-19 10:58:17.549390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.923 qpair failed and we were unable to recover it.
00:30:27.923 [2024-11-19 10:58:17.549560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.923 [2024-11-19 10:58:17.549592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.923 qpair failed and we were unable to recover it.
00:30:27.923 [2024-11-19 10:58:17.549698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.549730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.549859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.549890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.550096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.550128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.550244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.550276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.550393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.550425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 
00:30:27.923 [2024-11-19 10:58:17.550668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.550699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.550825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.550858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.550972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.551004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.551125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.551156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.551351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.551384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 
00:30:27.923 [2024-11-19 10:58:17.551505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.551537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.551646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.551677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.551788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.551819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.552006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.552038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.552222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.552255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 
00:30:27.923 [2024-11-19 10:58:17.552374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.552407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.552510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.552541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.552663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.552695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.552814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.552845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.553047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.553080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 
00:30:27.923 [2024-11-19 10:58:17.553355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.553388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.553593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.553624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.553816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.553849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.553959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.553995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.554171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.554212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 
00:30:27.923 [2024-11-19 10:58:17.554414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.554445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.554563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.554596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.554811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.554843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.555041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.555072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.555259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.555293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 
00:30:27.923 [2024-11-19 10:58:17.555522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.555553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.555793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.555824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.555941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.555972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.556089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.556120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 00:30:27.923 [2024-11-19 10:58:17.556238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.556270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.923 qpair failed and we were unable to recover it. 
00:30:27.923 [2024-11-19 10:58:17.556454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.923 [2024-11-19 10:58:17.556486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.556601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.556634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.556757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.556788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.556890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.556921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.557037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.557069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 
00:30:27.924 [2024-11-19 10:58:17.557234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.557265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.557392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.557424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.557615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.557647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.557751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.557782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.557954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.557985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 
00:30:27.924 [2024-11-19 10:58:17.558097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.558130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.558308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.558343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.558526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.558557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.558740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.558772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.558896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.558928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 
00:30:27.924 [2024-11-19 10:58:17.559100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.559132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.559257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.559290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.559405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.559437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.559672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.559703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.559823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.559855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 
00:30:27.924 [2024-11-19 10:58:17.559988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.560020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.560124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.560155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.560300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.560332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.560507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.560539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.560655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.560686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 
00:30:27.924 [2024-11-19 10:58:17.560871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.560902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.561077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.561107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.561237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.561269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.561463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.561494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.561743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.561813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 
00:30:27.924 [2024-11-19 10:58:17.561965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.562000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.562114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.562147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.562338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.562371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.562641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.562673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.562876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.562908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 
00:30:27.924 [2024-11-19 10:58:17.563091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.563122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.563312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.563346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.563461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.563493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.563732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.563764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.563908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.563941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 
00:30:27.924 [2024-11-19 10:58:17.564116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.564148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.564289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.924 [2024-11-19 10:58:17.564323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.924 qpair failed and we were unable to recover it. 00:30:27.924 [2024-11-19 10:58:17.564509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.925 [2024-11-19 10:58:17.564551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.925 qpair failed and we were unable to recover it. 00:30:27.925 [2024-11-19 10:58:17.564817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.925 [2024-11-19 10:58:17.564849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.925 qpair failed and we were unable to recover it. 00:30:27.925 [2024-11-19 10:58:17.564986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.925 [2024-11-19 10:58:17.565018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.925 qpair failed and we were unable to recover it. 
00:30:27.925 [2024-11-19 10:58:17.565193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.925 [2024-11-19 10:58:17.565237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.925 qpair failed and we were unable to recover it. 00:30:27.925 [2024-11-19 10:58:17.565346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.925 [2024-11-19 10:58:17.565378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.925 qpair failed and we were unable to recover it. 00:30:27.925 [2024-11-19 10:58:17.565502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.925 [2024-11-19 10:58:17.565533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.925 qpair failed and we were unable to recover it. 00:30:27.925 [2024-11-19 10:58:17.565725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.925 [2024-11-19 10:58:17.565757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.925 qpair failed and we were unable to recover it. 00:30:27.925 [2024-11-19 10:58:17.565888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.925 [2024-11-19 10:58:17.565919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.925 qpair failed and we were unable to recover it. 
00:30:27.925 [2024-11-19 10:58:17.566105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.566136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.566252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.566285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.566402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.566434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.566551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.566583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.566824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.566856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.566987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.567019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.567137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.567170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.567303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.567339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.567443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.567475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.567593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.567624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.567746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.567777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.567891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.567923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.568096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.568126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.568243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.568278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.568395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.568426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.568549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.568581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.568755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.568786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.568907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.568939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.569114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.569145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.569282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.569322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.569426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.569458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.569643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.569674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.569807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.569838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.569947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.569979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.570173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.570233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.570406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.570438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.570623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.570654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.570828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.570861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.570991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.571021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.571136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.571167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.571309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.571342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.571451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.571482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.571599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.571629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.571761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.571794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.571907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.571937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.572112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.572144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.572280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.572312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.572555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.572586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.572701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.572732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.925 qpair failed and we were unable to recover it.
00:30:27.925 [2024-11-19 10:58:17.572841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.925 [2024-11-19 10:58:17.572873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.572979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.573010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.573129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.573161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.573343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.573374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.573489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.573521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.573657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.573689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.573801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.573832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.573938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.573975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.574153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.574185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.574301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.574332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.574571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.574603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.574801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.574832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.574939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.574971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.575186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.575242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.575456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.575489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.575616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.575647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.575817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.575848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.575982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.576013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.576184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.576225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.576396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.576428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.576552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.576584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.576706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.576738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.576914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.576946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.577121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.577152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.577283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.577315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.577438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.577469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.577594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.577626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.577795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.577826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.577948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.577979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.578079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.578109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.578242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.578275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.578381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.578413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.578517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.578548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.578651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.578682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.578798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.578829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.579010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.579042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.579164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.579195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.579398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.579430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.579552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.579583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.579699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.579731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.579844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.579875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.579981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.580013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.580118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.580149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.580327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.580360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.580474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.580505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.580634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.580666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.580839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.580871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.580978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.926 [2024-11-19 10:58:17.581011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.926 qpair failed and we were unable to recover it.
00:30:27.926 [2024-11-19 10:58:17.581252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.927 [2024-11-19 10:58:17.581323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.927 qpair failed and we were unable to recover it.
00:30:27.927 [2024-11-19 10:58:17.581452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.927 [2024-11-19 10:58:17.581488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.927 qpair failed and we were unable to recover it.
00:30:27.927 [2024-11-19 10:58:17.581688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.927 [2024-11-19 10:58:17.581722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.927 qpair failed and we were unable to recover it.
00:30:27.927 [2024-11-19 10:58:17.581927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.927 [2024-11-19 10:58:17.581960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.927 qpair failed and we were unable to recover it.
00:30:27.927 [2024-11-19 10:58:17.582067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.927 [2024-11-19 10:58:17.582099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.927 qpair failed and we were unable to recover it.
00:30:27.927 [2024-11-19 10:58:17.582271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.927 [2024-11-19 10:58:17.582305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.927 qpair failed and we were unable to recover it.
00:30:27.927 [2024-11-19 10:58:17.582424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.927 [2024-11-19 10:58:17.582456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.927 qpair failed and we were unable to recover it.
00:30:27.927 [2024-11-19 10:58:17.582563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.927 [2024-11-19 10:58:17.582594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:27.927 qpair failed and we were unable to recover it.
00:30:27.927 [2024-11-19 10:58:17.582768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.582800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.582924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.582956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.583168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.583200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.583334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.583366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.583547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.583580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 
00:30:27.927 [2024-11-19 10:58:17.583697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.583740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.583959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.583991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.584164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.584195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.584447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.584480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.584656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.584687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 
00:30:27.927 [2024-11-19 10:58:17.584866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.584898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.585141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.585172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.585301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.585337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.585457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.585489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.585598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.585630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 
00:30:27.927 [2024-11-19 10:58:17.585744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.585775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.585962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.585994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.586181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.586223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.586414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.586446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.586564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.586596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 
00:30:27.927 [2024-11-19 10:58:17.586771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.586802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.586982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.587013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.587129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.587161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.587319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.587353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.587525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.587557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 
00:30:27.927 [2024-11-19 10:58:17.587674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.587707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.587912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.587943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.588054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.588086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.588223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.588256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.588425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.588456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 
00:30:27.927 [2024-11-19 10:58:17.588629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.588660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.588837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.588870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.589053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.589090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.589278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.589311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.589501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.589532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 
00:30:27.927 [2024-11-19 10:58:17.589747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.589779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.589904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.589935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.590041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.590072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.590257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.590290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 00:30:27.927 [2024-11-19 10:58:17.590411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.927 [2024-11-19 10:58:17.590442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.927 qpair failed and we were unable to recover it. 
00:30:27.928 [2024-11-19 10:58:17.590624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.590656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.590847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.590878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.591064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.591097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.591229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.591262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.591469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.591500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 
00:30:27.928 [2024-11-19 10:58:17.591689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.591721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.591928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.591960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.592135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.592166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.592375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.592407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.592536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.592568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 
00:30:27.928 [2024-11-19 10:58:17.592693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.592724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.592839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.592871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.593055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.593086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.593270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.593303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.593476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.593508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 
00:30:27.928 [2024-11-19 10:58:17.593625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.593656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.593768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.593800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.593922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.593952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.594159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.594191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.594377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.594410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 
00:30:27.928 [2024-11-19 10:58:17.594538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.594570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.594691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.594722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.594855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.594887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.595152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.595183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.595389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.595421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 
00:30:27.928 [2024-11-19 10:58:17.595613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.595644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.595751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.595783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.595955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.595986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.596166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.596198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.596314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.596346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 
00:30:27.928 [2024-11-19 10:58:17.596468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.596500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.596771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.596803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.596917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.596949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.597090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.597121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.597315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.597349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 
00:30:27.928 [2024-11-19 10:58:17.597535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.597566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.597814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.597846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.598032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.598062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.598192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.598243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.598364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.598395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 
00:30:27.928 [2024-11-19 10:58:17.598573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.598604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.598709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.928 [2024-11-19 10:58:17.598741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.928 qpair failed and we were unable to recover it. 00:30:27.928 [2024-11-19 10:58:17.598934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.929 [2024-11-19 10:58:17.598966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.929 qpair failed and we were unable to recover it. 00:30:27.929 [2024-11-19 10:58:17.599180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.929 [2024-11-19 10:58:17.599222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.929 qpair failed and we were unable to recover it. 00:30:27.929 [2024-11-19 10:58:17.599328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.929 [2024-11-19 10:58:17.599361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.929 qpair failed and we were unable to recover it. 
00:30:27.929 [2024-11-19 10:58:17.599476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.929 [2024-11-19 10:58:17.599507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.929 qpair failed and we were unable to recover it.
00:30:27.931 [... identical connect() failed (errno = 111) / qpair recovery failures for tqpair=0x239cba0 (addr=10.0.0.2, port=4420) repeat through 2024-11-19 10:58:17.620649 ...]
00:30:27.931 [2024-11-19 10:58:17.620754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.620785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.620891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.620922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.621046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.621078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.621212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.621244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.621422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.621454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 
00:30:27.931 [2024-11-19 10:58:17.621569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.621600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.621770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.621803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.621916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.621948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.622136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.622168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.622423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.622457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 
00:30:27.931 [2024-11-19 10:58:17.622641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.622673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.622849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.622880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.623062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.623094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.623309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.623341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.623464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.623496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 
00:30:27.931 [2024-11-19 10:58:17.623624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.623655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.623759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.623789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.623904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.623936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.624051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.624082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.624194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.624249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 
00:30:27.931 [2024-11-19 10:58:17.624357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.624388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.624628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.624660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.624767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.931 [2024-11-19 10:58:17.624805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.931 qpair failed and we were unable to recover it. 00:30:27.931 [2024-11-19 10:58:17.624930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.624962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.625145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.625177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 
00:30:27.932 [2024-11-19 10:58:17.625322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.625355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.625526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.625558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.625678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.625710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.625824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.625856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.625962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.625994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 
00:30:27.932 [2024-11-19 10:58:17.626172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.626212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.626345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.626377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.626498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.626529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.626654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.626686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.626867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.626899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 
00:30:27.932 [2024-11-19 10:58:17.627030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.627062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.627184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.627225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.627461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.627493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.627670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.627701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.627830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.627861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 
00:30:27.932 [2024-11-19 10:58:17.628035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.628066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.628233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.628267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.628382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.628414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.628521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.628553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.628661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.628692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 
00:30:27.932 [2024-11-19 10:58:17.628800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.628832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.628961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.628991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.629100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.629131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.629250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.629285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.629430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.629467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 
00:30:27.932 [2024-11-19 10:58:17.629642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.629674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.629794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.629825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.629954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.629986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.630174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.630212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.630389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.630421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 
00:30:27.932 [2024-11-19 10:58:17.630542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.630573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.630697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.630729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.630901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.630932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.631109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.631141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.631283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.631316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 
00:30:27.932 [2024-11-19 10:58:17.631499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.631530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.631719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.631751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.631948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.631980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.632135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.632218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.632421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.632456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 
00:30:27.932 [2024-11-19 10:58:17.632696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.932 [2024-11-19 10:58:17.632728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.932 qpair failed and we were unable to recover it. 00:30:27.932 [2024-11-19 10:58:17.632904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.632936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 00:30:27.933 [2024-11-19 10:58:17.633105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.633137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 00:30:27.933 [2024-11-19 10:58:17.633426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.633459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 00:30:27.933 [2024-11-19 10:58:17.633572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.633603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 
00:30:27.933 [2024-11-19 10:58:17.633709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.633741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 00:30:27.933 [2024-11-19 10:58:17.633927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.633957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 00:30:27.933 [2024-11-19 10:58:17.634135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.634167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 00:30:27.933 [2024-11-19 10:58:17.634368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.634400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 00:30:27.933 [2024-11-19 10:58:17.634534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.634565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 
00:30:27.933 [2024-11-19 10:58:17.634773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.634806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 00:30:27.933 [2024-11-19 10:58:17.635069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.635108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 00:30:27.933 [2024-11-19 10:58:17.635326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.635359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 00:30:27.933 [2024-11-19 10:58:17.635544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.635575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 00:30:27.933 [2024-11-19 10:58:17.635699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.635730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 
00:30:27.933 [2024-11-19 10:58:17.635991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.636023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 00:30:27.933 [2024-11-19 10:58:17.636149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.636181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 00:30:27.933 [2024-11-19 10:58:17.636317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.636349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 00:30:27.933 [2024-11-19 10:58:17.636637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.636669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 00:30:27.933 [2024-11-19 10:58:17.636782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.933 [2024-11-19 10:58:17.636814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:27.933 qpair failed and we were unable to recover it. 
00:30:27.933 [2024-11-19 10:58:17.637003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.637034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.637218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.637251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.637385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.637417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.637545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.637576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.637709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.637741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.637853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.637885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.638064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.638095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.638354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.638387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.638496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.638527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.638644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.638677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.638876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.638907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.639006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.639038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.639248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.639281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.639428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.639460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.639656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.639688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.639881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.639912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.640159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.640191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.640331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.640363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.640554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.640590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.640776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.640807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.641051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.641082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.641253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.641287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.641419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.641451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.641625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.641656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.641761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.641793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.641996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.642027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.642151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.642183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.642327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.933 [2024-11-19 10:58:17.642360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.933 qpair failed and we were unable to recover it.
00:30:27.933 [2024-11-19 10:58:17.642530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.642561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.642807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.642839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.642944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.642976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.643091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.643122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.643262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.643295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.643420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.643452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.643711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.643743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.643868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.643899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.644161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.644193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.644391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.644422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.644542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.644574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.644702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.644734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.644942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.644974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.645158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.645189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.645329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.645361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.645464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.645496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.645613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.645645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.645825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.645861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.645980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.646012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.646117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.646148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.646278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.646311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.646487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.646519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.646705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.646737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.646844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.646875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.646979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.647010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.647113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.647145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.647258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.647291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.647478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.647509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.647710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.647743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.647937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.647967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.648095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.648127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.648327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.648361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.648495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.648527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.648716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.648748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.648923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.648956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.649091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.649122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.649260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.649293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.649580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.649612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.649800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.649833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.650014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.650046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.650178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.650219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.650470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.650502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.650638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.650670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.650856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.650887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.651064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.651102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.651310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.651344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.651468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.651498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.934 [2024-11-19 10:58:17.651718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.934 [2024-11-19 10:58:17.651750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.934 qpair failed and we were unable to recover it.
00:30:27.935 [2024-11-19 10:58:17.651932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.935 [2024-11-19 10:58:17.651964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.935 qpair failed and we were unable to recover it.
00:30:27.935 [2024-11-19 10:58:17.652145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.935 [2024-11-19 10:58:17.652176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.935 qpair failed and we were unable to recover it.
00:30:27.935 [2024-11-19 10:58:17.652313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.935 [2024-11-19 10:58:17.652345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.935 qpair failed and we were unable to recover it.
00:30:27.935 [2024-11-19 10:58:17.652521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.935 [2024-11-19 10:58:17.652553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.935 qpair failed and we were unable to recover it.
00:30:27.935 [2024-11-19 10:58:17.652750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.935 [2024-11-19 10:58:17.652781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.935 qpair failed and we were unable to recover it.
00:30:27.935 [2024-11-19 10:58:17.652909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.935 [2024-11-19 10:58:17.652942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.935 qpair failed and we were unable to recover it.
00:30:27.935 [2024-11-19 10:58:17.653069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.935 [2024-11-19 10:58:17.653100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.935 qpair failed and we were unable to recover it.
00:30:27.935 [2024-11-19 10:58:17.653235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.935 [2024-11-19 10:58:17.653268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.935 qpair failed and we were unable to recover it.
00:30:27.935 [2024-11-19 10:58:17.653384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.935 [2024-11-19 10:58:17.653415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.935 qpair failed and we were unable to recover it.
00:30:27.935 [2024-11-19 10:58:17.653610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.935 [2024-11-19 10:58:17.653642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.935 qpair failed and we were unable to recover it.
00:30:27.935 [2024-11-19 10:58:17.653843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.935 [2024-11-19 10:58:17.653874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:27.935 qpair failed and we were unable to recover it.
00:30:28.217 [2024-11-19 10:58:17.654002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.217 [2024-11-19 10:58:17.654034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.217 qpair failed and we were unable to recover it.
00:30:28.217 [2024-11-19 10:58:17.654147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.217 [2024-11-19 10:58:17.654178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.217 qpair failed and we were unable to recover it.
00:30:28.217 [2024-11-19 10:58:17.654365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.217 [2024-11-19 10:58:17.654397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.217 qpair failed and we were unable to recover it.
00:30:28.217 [2024-11-19 10:58:17.654522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.217 [2024-11-19 10:58:17.654553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.217 qpair failed and we were unable to recover it.
00:30:28.217 [2024-11-19 10:58:17.654665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.217 [2024-11-19 10:58:17.654698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.217 qpair failed and we were unable to recover it.
00:30:28.217 [2024-11-19 10:58:17.654899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.217 [2024-11-19 10:58:17.654930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.217 qpair failed and we were unable to recover it.
00:30:28.217 [2024-11-19 10:58:17.655055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.217 [2024-11-19 10:58:17.655087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.217 qpair failed and we were unable to recover it.
00:30:28.217 [2024-11-19 10:58:17.655216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.217 [2024-11-19 10:58:17.655248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.218 qpair failed and we were unable to recover it.
00:30:28.218 [2024-11-19 10:58:17.655492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.655522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.655652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.655683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.655800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.655833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.655955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.655986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.656162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.656200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 
00:30:28.218 [2024-11-19 10:58:17.656381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.656412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.656529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.656561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.656675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.656706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.656894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.656926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.657172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.657212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 
00:30:28.218 [2024-11-19 10:58:17.657404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.657436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.657549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.657580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.657758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.657790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.657912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.657943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.658067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.658098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 
00:30:28.218 [2024-11-19 10:58:17.658213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.658245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.658416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.658448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.658555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.658586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.658859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.658892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.659022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.659053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 
00:30:28.218 [2024-11-19 10:58:17.659181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.659221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.659395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.659426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.659528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.659560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.659669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.659700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.659820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.659851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 
00:30:28.218 [2024-11-19 10:58:17.659985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.660017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.660183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.660227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.660400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.660431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.660677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.660709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.660908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.660939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 
00:30:28.218 [2024-11-19 10:58:17.661049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.661082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.661225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.661257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.661447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.661480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.661653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.661684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.661874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.661905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 
00:30:28.218 [2024-11-19 10:58:17.662008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.662039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.662225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.662259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.218 qpair failed and we were unable to recover it. 00:30:28.218 [2024-11-19 10:58:17.662375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.218 [2024-11-19 10:58:17.662406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.662521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.662553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.662671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.662702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 
00:30:28.219 [2024-11-19 10:58:17.662817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.662848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.663112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.663143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.663257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.663290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.663420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.663451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.663566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.663598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 
00:30:28.219 [2024-11-19 10:58:17.663766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.663835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.664033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.664070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.664182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.664225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.664350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.664384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.664578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.664611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 
00:30:28.219 [2024-11-19 10:58:17.664800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.664832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.664940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.664972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.665155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.665188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.665317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.665351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.665538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.665570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 
00:30:28.219 [2024-11-19 10:58:17.665752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.665784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.665892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.665924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.666102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.666134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.666258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.666302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.666558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.666590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 
00:30:28.219 [2024-11-19 10:58:17.666778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.666811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.667056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.667089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.667266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.667299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.667478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.667510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.667686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.667718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 
00:30:28.219 [2024-11-19 10:58:17.667851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.667883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.667990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.668022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.668160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.668191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.668382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.668415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.668660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.668693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 
00:30:28.219 [2024-11-19 10:58:17.668810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.668842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.668964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.668996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.669139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.669171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.669292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.669326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 00:30:28.219 [2024-11-19 10:58:17.669444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.219 [2024-11-19 10:58:17.669476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.219 qpair failed and we were unable to recover it. 
00:30:28.219 [2024-11-19 10:58:17.669663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.220 [2024-11-19 10:58:17.669695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.220 qpair failed and we were unable to recover it. 00:30:28.220 [2024-11-19 10:58:17.669801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.220 [2024-11-19 10:58:17.669833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.220 qpair failed and we were unable to recover it. 00:30:28.220 [2024-11-19 10:58:17.669958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.220 [2024-11-19 10:58:17.669990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.220 qpair failed and we were unable to recover it. 00:30:28.220 [2024-11-19 10:58:17.670160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.220 [2024-11-19 10:58:17.670192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.220 qpair failed and we were unable to recover it. 00:30:28.220 [2024-11-19 10:58:17.670319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.220 [2024-11-19 10:58:17.670352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.220 qpair failed and we were unable to recover it. 
00:30:28.220 [2024-11-19 10:58:17.670525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.220 [2024-11-19 10:58:17.670558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.220 qpair failed and we were unable to recover it. 00:30:28.220 [2024-11-19 10:58:17.670800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.220 [2024-11-19 10:58:17.670832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.220 qpair failed and we were unable to recover it. 00:30:28.220 [2024-11-19 10:58:17.670952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.220 [2024-11-19 10:58:17.670983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.220 qpair failed and we were unable to recover it. 00:30:28.220 [2024-11-19 10:58:17.671098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.220 [2024-11-19 10:58:17.671130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.220 qpair failed and we were unable to recover it. 00:30:28.220 [2024-11-19 10:58:17.671253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.220 [2024-11-19 10:58:17.671287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.220 qpair failed and we were unable to recover it. 
00:30:28.220 [2024-11-19 10:58:17.671552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.671589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.671711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.671744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.672030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.672063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.672275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.672307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.672437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.672468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.672641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.672674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.672856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.672887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.673030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.673062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.673180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.673225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.673353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.673384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.673488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.673520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.673651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.673683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.673807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.673837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.673955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.673988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.674197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.674241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.674413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.674445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.674703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.674734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.674845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.674877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.675020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.675051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.675170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.675222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.675409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.675442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.675555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.675586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.675763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.675795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.675920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.675952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.676087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.676117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.676304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.676337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.676538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.676571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.676701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.220 [2024-11-19 10:58:17.676738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.220 qpair failed and we were unable to recover it.
00:30:28.220 [2024-11-19 10:58:17.676949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.676981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.677120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.677153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.677406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.677438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.677558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.677590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.677799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.677830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.677962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.677995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.678172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.678210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.678334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.678365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.678558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.678590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.678792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.678824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.678950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.678981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.679106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.679138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.679333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.679366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.679513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.679545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.679750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.679781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.679996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.680028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.680159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.680190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.680386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.680419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.680599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.680630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.680748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.680782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.680972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.681003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.681193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.681238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.681427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.681461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.681582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.681614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.681742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.681775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.681916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.681948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.682132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.682169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.682361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.682431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.682739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.682776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.682967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.682999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.683241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.683275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.683421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.683454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.683741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.683773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.683964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.683995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.684129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.684160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.684387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.684420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.684682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.684715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.684928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.221 [2024-11-19 10:58:17.684960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.221 qpair failed and we were unable to recover it.
00:30:28.221 [2024-11-19 10:58:17.685220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.685252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.685437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.685468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.685739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.685771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.686014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.686044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.686229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.686263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.686398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.686429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.686623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.686654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.686858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.686890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.687098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.687130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.687297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.687330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.687510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.687542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.687676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.687708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.687833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.687864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.687985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.688016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.688256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.688289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.688343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23aaaf0 (9): Bad file descriptor
00:30:28.222 [2024-11-19 10:58:17.688617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.688673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.688902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.688936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.689193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.689240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.689453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.689485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.689622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.689654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.689870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.689901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.690112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.690144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.690313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.690346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.690530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.690562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.690688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.690720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.690937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.690969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.691088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.691119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.691362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.691397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.691545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.691577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.691760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.691791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.692030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.692062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.222 qpair failed and we were unable to recover it.
00:30:28.222 [2024-11-19 10:58:17.692309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.222 [2024-11-19 10:58:17.692341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.692466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.223 [2024-11-19 10:58:17.692498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.692625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.223 [2024-11-19 10:58:17.692658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.692846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.223 [2024-11-19 10:58:17.692879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.693006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.223 [2024-11-19 10:58:17.693038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.693181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.223 [2024-11-19 10:58:17.693222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.693339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.223 [2024-11-19 10:58:17.693372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.693547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.223 [2024-11-19 10:58:17.693578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.693817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.223 [2024-11-19 10:58:17.693849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.693978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.223 [2024-11-19 10:58:17.694010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.694144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.223 [2024-11-19 10:58:17.694181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.694319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.223 [2024-11-19 10:58:17.694351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.694488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.223 [2024-11-19 10:58:17.694520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.694693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.223 [2024-11-19 10:58:17.694725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.694863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.223 [2024-11-19 10:58:17.694895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.695014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.223 [2024-11-19 10:58:17.695046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.695223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.223 [2024-11-19 10:58:17.695256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.695373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.223 [2024-11-19 10:58:17.695403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.223 qpair failed and we were unable to recover it.
00:30:28.223 [2024-11-19 10:58:17.695546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.695578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 00:30:28.223 [2024-11-19 10:58:17.695753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.695785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 00:30:28.223 [2024-11-19 10:58:17.695906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.695937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 00:30:28.223 [2024-11-19 10:58:17.696173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.696215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 00:30:28.223 [2024-11-19 10:58:17.696343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.696375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 
00:30:28.223 [2024-11-19 10:58:17.696497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.696529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 00:30:28.223 [2024-11-19 10:58:17.696741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.696773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 00:30:28.223 [2024-11-19 10:58:17.696891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.696923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 00:30:28.223 [2024-11-19 10:58:17.697118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.697149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 00:30:28.223 [2024-11-19 10:58:17.697287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.697320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 
00:30:28.223 [2024-11-19 10:58:17.697503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.697535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 00:30:28.223 [2024-11-19 10:58:17.697717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.697749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 00:30:28.223 [2024-11-19 10:58:17.697937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.697968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 00:30:28.223 [2024-11-19 10:58:17.698151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.698182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 00:30:28.223 [2024-11-19 10:58:17.698383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.698415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 
00:30:28.223 [2024-11-19 10:58:17.698560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.698591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 00:30:28.223 [2024-11-19 10:58:17.698862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.698893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 00:30:28.223 [2024-11-19 10:58:17.699023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.699055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 00:30:28.223 [2024-11-19 10:58:17.699294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.699326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 00:30:28.223 [2024-11-19 10:58:17.699458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.223 [2024-11-19 10:58:17.699489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.223 qpair failed and we were unable to recover it. 
00:30:28.224 [2024-11-19 10:58:17.699678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.699711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.699907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.699938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.700145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.700176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.700337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.700369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.700610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.700642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 
00:30:28.224 [2024-11-19 10:58:17.700769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.700800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.700935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.700967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.701228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.701261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.701434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.701465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.701701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.701733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 
00:30:28.224 [2024-11-19 10:58:17.702039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.702072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.702262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.702295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.702478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.702515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.702700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.702732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.702995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.703026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 
00:30:28.224 [2024-11-19 10:58:17.703274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.703306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.703613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.703646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.703896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.703928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.704129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.704161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.704383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.704415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 
00:30:28.224 [2024-11-19 10:58:17.704564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.704596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.704780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.704812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.705005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.705037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.705279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.705312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.705422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.705453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 
00:30:28.224 [2024-11-19 10:58:17.705573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.705605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.705804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.705837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.706058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.706090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.706224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.706256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.706447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.706479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 
00:30:28.224 [2024-11-19 10:58:17.706670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.706701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.706921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.706953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.707120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.707153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.707352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.707385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.707602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.707634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 
00:30:28.224 [2024-11-19 10:58:17.707763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.707796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.707983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.224 [2024-11-19 10:58:17.708016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.224 qpair failed and we were unable to recover it. 00:30:28.224 [2024-11-19 10:58:17.708293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.708325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.708500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.708532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.708684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.708716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 
00:30:28.225 [2024-11-19 10:58:17.708851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.708882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.709087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.709118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.709300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.709333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.709539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.709572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.709877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.709909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 
00:30:28.225 [2024-11-19 10:58:17.710160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.710192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.710402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.710434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.710570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.710602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.710782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.710813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.711012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.711044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 
00:30:28.225 [2024-11-19 10:58:17.711286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.711318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.711559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.711591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.711771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.711809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.712047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.712079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.712253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.712285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 
00:30:28.225 [2024-11-19 10:58:17.712472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.712504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.712629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.712660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.712867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.712900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.713160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.713193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.713358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.713390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 
00:30:28.225 [2024-11-19 10:58:17.713563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.713595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.713729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.713760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.714011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.714043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.714175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.714214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.714355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.714387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 
00:30:28.225 [2024-11-19 10:58:17.714516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.714548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.714746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.714778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.715037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.715069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.715332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.715365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.715511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.715542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 
00:30:28.225 [2024-11-19 10:58:17.715783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.715815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.716080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.716112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.716398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.716431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.716568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.716600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 00:30:28.225 [2024-11-19 10:58:17.716733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.225 [2024-11-19 10:58:17.716765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.225 qpair failed and we were unable to recover it. 
00:30:28.225 [2024-11-19 10:58:17.716968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.717000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.717225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.717257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.717447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.717479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.717651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.717682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.717915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.717961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 
00:30:28.226 [2024-11-19 10:58:17.718106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.718140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.718341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.718376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.718495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.718528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.718640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.718672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.718876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.718909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 
00:30:28.226 [2024-11-19 10:58:17.719094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.719126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.719256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.719290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.719413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.719444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.719587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.719620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.719795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.719826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 
00:30:28.226 [2024-11-19 10:58:17.720087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.720119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.720360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.720393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.720514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.720547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.720692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.720724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.720949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.720980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 
00:30:28.226 [2024-11-19 10:58:17.721150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.721182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.721451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.721483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.721606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.721637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.721744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.721777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.721985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.722016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 
00:30:28.226 [2024-11-19 10:58:17.722258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.722292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.722527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.722558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.722701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.722733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.723002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.723033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.723239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.723274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 
00:30:28.226 [2024-11-19 10:58:17.723446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.723477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.723724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.723761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.723968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.724001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.724287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.724319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 00:30:28.226 [2024-11-19 10:58:17.724510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.724543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.226 qpair failed and we were unable to recover it. 
00:30:28.226 [2024-11-19 10:58:17.724752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.226 [2024-11-19 10:58:17.724784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.725096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.725127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.725303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.725338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.725536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.725573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.725716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.725749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 
00:30:28.227 [2024-11-19 10:58:17.725957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.725989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.726111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.726143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.726409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.726448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.726627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.726664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.726926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.726959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 
00:30:28.227 [2024-11-19 10:58:17.727219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.727255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.727418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.727453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.727666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.727698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.727997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.728028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.728219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.728253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 
00:30:28.227 [2024-11-19 10:58:17.728426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.728458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.728603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.728634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.728921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.728954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.729192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.729233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.729418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.729451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 
00:30:28.227 [2024-11-19 10:58:17.729710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.729742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.729929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.729961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.730088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.730120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.730322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.730363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.730497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.730529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 
00:30:28.227 [2024-11-19 10:58:17.730667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.730698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.730939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.730972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.731099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.731131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.731253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.731287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.731475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.731507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 
00:30:28.227 [2024-11-19 10:58:17.731634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.731666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.731784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.731816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.731996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.732029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.732216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.732250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.732368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.732400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 
00:30:28.227 [2024-11-19 10:58:17.732595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.227 [2024-11-19 10:58:17.732627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.227 qpair failed and we were unable to recover it. 00:30:28.227 [2024-11-19 10:58:17.732807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.228 [2024-11-19 10:58:17.732846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.228 qpair failed and we were unable to recover it. 00:30:28.228 [2024-11-19 10:58:17.733043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.228 [2024-11-19 10:58:17.733076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.228 qpair failed and we were unable to recover it. 00:30:28.228 [2024-11-19 10:58:17.733248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.228 [2024-11-19 10:58:17.733282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.228 qpair failed and we were unable to recover it. 00:30:28.228 [2024-11-19 10:58:17.733500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.228 [2024-11-19 10:58:17.733531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.228 qpair failed and we were unable to recover it. 
00:30:28.228 [2024-11-19 10:58:17.733661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.228 [2024-11-19 10:58:17.733694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.228 qpair failed and we were unable to recover it.
[... the three-line connect()/qpair-failed sequence above repeats 22 more times for tqpair=0x7f6b34000b90 (10:58:17.733878 through 10:58:17.737701) ...]
00:30:28.228 [2024-11-19 10:58:17.737975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.228 [2024-11-19 10:58:17.738046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.228 qpair failed and we were unable to recover it.
[... repeats 79 more times for tqpair=0x7f6b38000b90 (10:58:17.738265 through 10:58:17.753264) ...]
00:30:28.230 [2024-11-19 10:58:17.753520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.230 [2024-11-19 10:58:17.753592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.230 qpair failed and we were unable to recover it.
[... repeats 11 more times for tqpair=0x239cba0 (10:58:17.753795 through 10:58:17.755472) ...]
00:30:28.231 [2024-11-19 10:58:17.755632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.755664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.755792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.755823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.755961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.755994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.756166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.756197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.756338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.756370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 
00:30:28.231 [2024-11-19 10:58:17.756475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.756508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.756769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.756801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.756922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.756955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.757069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.757101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.757220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.757252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 
00:30:28.231 [2024-11-19 10:58:17.757481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.757514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.757676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.757708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.757842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.757874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.758097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.758129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.758240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.758273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 
00:30:28.231 [2024-11-19 10:58:17.758415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.758452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.758569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.758601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.758703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.758734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.758919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.758951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.759082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.759114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 
00:30:28.231 [2024-11-19 10:58:17.759219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.759252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.759434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.759465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.759587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.759619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.759821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.759853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.760091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.760123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 
00:30:28.231 [2024-11-19 10:58:17.760315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.760348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.760459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.760491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.760663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.231 [2024-11-19 10:58:17.760695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.231 qpair failed and we were unable to recover it. 00:30:28.231 [2024-11-19 10:58:17.760932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.760964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.761089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.761121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 
00:30:28.232 [2024-11-19 10:58:17.761248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.761281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.761402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.761434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.761534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.761567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.761739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.761771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.761956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.761987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 
00:30:28.232 [2024-11-19 10:58:17.762170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.762213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.762344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.762376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.762501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.762533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.762647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.762678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.762853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.762885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 
00:30:28.232 [2024-11-19 10:58:17.763137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.763169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.763287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.763319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.763450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.763493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.763611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.763642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.763769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.763799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 
00:30:28.232 [2024-11-19 10:58:17.764036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.764067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.764170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.764210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.764334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.764365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.764539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.764571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.764677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.764710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 
00:30:28.232 [2024-11-19 10:58:17.764845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.764876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.765117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.765148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.765281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.765314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.765504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.765536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.765667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.765699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 
00:30:28.232 [2024-11-19 10:58:17.765818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.765850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.766030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.766062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.766302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.766335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.766462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.766495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.766640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.766670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 
00:30:28.232 [2024-11-19 10:58:17.766947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.766980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.767103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.767134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.767364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.767397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.767585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.767617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.767791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.767822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 
00:30:28.232 [2024-11-19 10:58:17.768006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.232 [2024-11-19 10:58:17.768038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.232 qpair failed and we were unable to recover it. 00:30:28.232 [2024-11-19 10:58:17.768228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.233 [2024-11-19 10:58:17.768261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.233 qpair failed and we were unable to recover it. 00:30:28.233 [2024-11-19 10:58:17.768453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.233 [2024-11-19 10:58:17.768484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.233 qpair failed and we were unable to recover it. 00:30:28.233 [2024-11-19 10:58:17.768610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.233 [2024-11-19 10:58:17.768642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.233 qpair failed and we were unable to recover it. 00:30:28.233 [2024-11-19 10:58:17.768857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.233 [2024-11-19 10:58:17.768894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.233 qpair failed and we were unable to recover it. 
00:30:28.233 [2024-11-19 10:58:17.769084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.233 [2024-11-19 10:58:17.769117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.233 qpair failed and we were unable to recover it. 00:30:28.233 [2024-11-19 10:58:17.769241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.233 [2024-11-19 10:58:17.769273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.233 qpair failed and we were unable to recover it. 00:30:28.233 [2024-11-19 10:58:17.769444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.233 [2024-11-19 10:58:17.769476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.233 qpair failed and we were unable to recover it. 00:30:28.233 [2024-11-19 10:58:17.769589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.233 [2024-11-19 10:58:17.769621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.233 qpair failed and we were unable to recover it. 00:30:28.233 [2024-11-19 10:58:17.769791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.233 [2024-11-19 10:58:17.769823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.233 qpair failed and we were unable to recover it. 
00:30:28.233 [2024-11-19 10:58:17.770016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.233 [2024-11-19 10:58:17.770048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.233 qpair failed and we were unable to recover it. 
00:30:28.236 [2024-11-19 10:58:17.796373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.236 [2024-11-19 10:58:17.796407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.236 qpair failed and we were unable to recover it. 00:30:28.236 [2024-11-19 10:58:17.796610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.236 [2024-11-19 10:58:17.796643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.236 qpair failed and we were unable to recover it. 00:30:28.236 [2024-11-19 10:58:17.796891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.236 [2024-11-19 10:58:17.796924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.236 qpair failed and we were unable to recover it. 00:30:28.236 [2024-11-19 10:58:17.797208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.236 [2024-11-19 10:58:17.797240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.236 qpair failed and we were unable to recover it. 00:30:28.236 [2024-11-19 10:58:17.797371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.236 [2024-11-19 10:58:17.797403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.236 qpair failed and we were unable to recover it. 
00:30:28.236 [2024-11-19 10:58:17.797548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.236 [2024-11-19 10:58:17.797579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.236 qpair failed and we were unable to recover it. 00:30:28.236 [2024-11-19 10:58:17.797706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.236 [2024-11-19 10:58:17.797738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.236 qpair failed and we were unable to recover it. 00:30:28.236 [2024-11-19 10:58:17.797889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.236 [2024-11-19 10:58:17.797919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.236 qpair failed and we were unable to recover it. 00:30:28.236 [2024-11-19 10:58:17.798160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.236 [2024-11-19 10:58:17.798192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.236 qpair failed and we were unable to recover it. 00:30:28.236 [2024-11-19 10:58:17.798395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.236 [2024-11-19 10:58:17.798427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.236 qpair failed and we were unable to recover it. 
00:30:28.236 [2024-11-19 10:58:17.798624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.236 [2024-11-19 10:58:17.798657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.236 qpair failed and we were unable to recover it. 00:30:28.236 [2024-11-19 10:58:17.798933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.236 [2024-11-19 10:58:17.798965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.236 qpair failed and we were unable to recover it. 00:30:28.236 [2024-11-19 10:58:17.799097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.236 [2024-11-19 10:58:17.799129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.236 qpair failed and we were unable to recover it. 00:30:28.236 [2024-11-19 10:58:17.799350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.236 [2024-11-19 10:58:17.799384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.236 qpair failed and we were unable to recover it. 00:30:28.236 [2024-11-19 10:58:17.799526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.236 [2024-11-19 10:58:17.799558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.236 qpair failed and we were unable to recover it. 
00:30:28.236 [2024-11-19 10:58:17.799796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.236 [2024-11-19 10:58:17.799867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.236 qpair failed and we were unable to recover it. [previous two messages repeated 79 more times for tqpair=0x7f6b38000b90 between 10:58:17.800077 and 10:58:17.819548]
00:30:28.239 [2024-11-19 10:58:17.819691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.819728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 00:30:28.239 [2024-11-19 10:58:17.820013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.820046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 00:30:28.239 [2024-11-19 10:58:17.820237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.820270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 00:30:28.239 [2024-11-19 10:58:17.820408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.820440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 00:30:28.239 [2024-11-19 10:58:17.820631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.820664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 
00:30:28.239 [2024-11-19 10:58:17.820874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.820906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 00:30:28.239 [2024-11-19 10:58:17.821145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.821177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 00:30:28.239 [2024-11-19 10:58:17.821398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.821430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 00:30:28.239 [2024-11-19 10:58:17.821541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.821572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 00:30:28.239 [2024-11-19 10:58:17.821694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.821726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 
00:30:28.239 [2024-11-19 10:58:17.821915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.821946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 00:30:28.239 [2024-11-19 10:58:17.822190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.822231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 00:30:28.239 [2024-11-19 10:58:17.822482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.822514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 00:30:28.239 [2024-11-19 10:58:17.822763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.822794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 00:30:28.239 [2024-11-19 10:58:17.823040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.823073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 
00:30:28.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 4091053 Killed "${NVMF_APP[@]}" "$@" 00:30:28.239 [2024-11-19 10:58:17.823324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.823357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 00:30:28.239 [2024-11-19 10:58:17.823500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.823531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 00:30:28.239 10:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:30:28.239 10:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:28.239 [2024-11-19 10:58:17.823800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.239 [2024-11-19 10:58:17.823832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.239 qpair failed and we were unable to recover it. 
00:30:28.239 10:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:28.239 10:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:28.239 [2024-11-19 10:58:17.824094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.824125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 10:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:28.240 [2024-11-19 10:58:17.824411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.824444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.824684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.824716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.824975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.825007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 
00:30:28.240 [2024-11-19 10:58:17.825210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.825243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.825386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.825418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.825612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.825651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.825932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.825964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.826151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.826182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 
00:30:28.240 [2024-11-19 10:58:17.826338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.826371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.826566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.826597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.826813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.826845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.826980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.827011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.827297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.827332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 
00:30:28.240 [2024-11-19 10:58:17.827600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.827632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.827920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.827953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.828134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.828167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.828389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.828422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.828624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.828656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 
00:30:28.240 [2024-11-19 10:58:17.828859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.828889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.829136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.829165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.829416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.829449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.829668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.829699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.829832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.829864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 
00:30:28.240 [2024-11-19 10:58:17.830040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.830071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.830377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.830410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.830651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.830683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 10:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4091793 00:30:28.240 [2024-11-19 10:58:17.830944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.830976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 
00:30:28.240 10:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4091793 00:30:28.240 [2024-11-19 10:58:17.831196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 10:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:28.240 [2024-11-19 10:58:17.831238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.831361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.831392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 10:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4091793 ']' 00:30:28.240 [2024-11-19 10:58:17.831633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.831667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.240 [2024-11-19 10:58:17.831851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.831891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 
00:30:28.240 10:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.240 [2024-11-19 10:58:17.832012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.240 [2024-11-19 10:58:17.832045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.240 qpair failed and we were unable to recover it. 00:30:28.241 10:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:28.241 [2024-11-19 10:58:17.832276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.832312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 [2024-11-19 10:58:17.832506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.832548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 10:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:28.241 [2024-11-19 10:58:17.832742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.832774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 10:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:28.241 [2024-11-19 10:58:17.832985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.833019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 10:58:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:28.241 [2024-11-19 10:58:17.833237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.833274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 [2024-11-19 10:58:17.833518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.833551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 
00:30:28.241 [2024-11-19 10:58:17.833746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.833777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 [2024-11-19 10:58:17.834013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.834045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 [2024-11-19 10:58:17.834225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.834263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 [2024-11-19 10:58:17.834413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.834446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 [2024-11-19 10:58:17.834639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.834670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 
00:30:28.241 [2024-11-19 10:58:17.834904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.834936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 [2024-11-19 10:58:17.835178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.835219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 [2024-11-19 10:58:17.835416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.835452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 [2024-11-19 10:58:17.835717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.835749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 [2024-11-19 10:58:17.836054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.836090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 
00:30:28.241 [2024-11-19 10:58:17.836296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.836329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 [2024-11-19 10:58:17.836573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.836606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 [2024-11-19 10:58:17.836747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.836780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 [2024-11-19 10:58:17.836899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.836931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 [2024-11-19 10:58:17.837221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.837257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 
00:30:28.241 [2024-11-19 10:58:17.837482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.837517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 [2024-11-19 10:58:17.837792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.837823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 [2024-11-19 10:58:17.838020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.838052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.241 qpair failed and we were unable to recover it. 00:30:28.241 [2024-11-19 10:58:17.838279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.241 [2024-11-19 10:58:17.838313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.838524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.838556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 
00:30:28.242 [2024-11-19 10:58:17.838746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.838778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.839045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.839076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.839259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.839292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.839486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.839517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.839696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.839727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 
00:30:28.242 [2024-11-19 10:58:17.839984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.840019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.840191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.840232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.840439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.840471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.840617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.840651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.840922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.840955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 
00:30:28.242 [2024-11-19 10:58:17.841092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.841125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.841401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.841434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.841627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.841658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.841850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.841882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.842088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.842119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 
00:30:28.242 [2024-11-19 10:58:17.842381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.842416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.842605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.842637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.842789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.842821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.843037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.843068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.843379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.843412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 
00:30:28.242 [2024-11-19 10:58:17.843613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.843646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.843904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.843940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.844182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.844224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.844370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.844405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.844677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.844708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 
00:30:28.242 [2024-11-19 10:58:17.845021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.845053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.845346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.845379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.845526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.845557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.845688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.845720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.845921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.845952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 
00:30:28.242 [2024-11-19 10:58:17.846136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.846167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.846371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.846403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.846544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.846575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.846770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.846802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 00:30:28.242 [2024-11-19 10:58:17.847019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.242 [2024-11-19 10:58:17.847052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.242 qpair failed and we were unable to recover it. 
00:30:28.242 [2024-11-19 10:58:17.847241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.847274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.847485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.847519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.847789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.847823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.848101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.848132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.848372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.848405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 
00:30:28.243 [2024-11-19 10:58:17.848652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.848684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.848914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.848946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.849194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.849246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.849490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.849523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.849792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.849823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 
00:30:28.243 [2024-11-19 10:58:17.850117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.850150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.850297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.850329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.850576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.850613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.850913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.850945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.851186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.851235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 
00:30:28.243 [2024-11-19 10:58:17.851414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.851445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.851639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.851672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.851894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.851926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.852167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.852199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.852471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.852503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 
00:30:28.243 [2024-11-19 10:58:17.852631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.852662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.852907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.852938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.853180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.853243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.853374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.853405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.853535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.853568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 
00:30:28.243 [2024-11-19 10:58:17.853858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.853891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.854012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.854044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.854246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.854280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.854429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.854461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.854642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.854673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 
00:30:28.243 [2024-11-19 10:58:17.854849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.854880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.855152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.855184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.855468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.855499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.855623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.855655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.855922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.855953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 
00:30:28.243 [2024-11-19 10:58:17.856075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.856107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.856251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.243 [2024-11-19 10:58:17.856284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.243 qpair failed and we were unable to recover it. 00:30:28.243 [2024-11-19 10:58:17.856411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.244 [2024-11-19 10:58:17.856442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.244 qpair failed and we were unable to recover it. 00:30:28.244 [2024-11-19 10:58:17.856710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.244 [2024-11-19 10:58:17.856742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.244 qpair failed and we were unable to recover it. 00:30:28.244 [2024-11-19 10:58:17.856943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.244 [2024-11-19 10:58:17.856975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.244 qpair failed and we were unable to recover it. 
00:30:28.244 [2024-11-19 10:58:17.857253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.244 [2024-11-19 10:58:17.857286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.244 qpair failed and we were unable to recover it. 00:30:28.244 [2024-11-19 10:58:17.857544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.244 [2024-11-19 10:58:17.857577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.244 qpair failed and we were unable to recover it. 00:30:28.244 [2024-11-19 10:58:17.857884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.244 [2024-11-19 10:58:17.857915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.244 qpair failed and we were unable to recover it. 00:30:28.244 [2024-11-19 10:58:17.858186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.244 [2024-11-19 10:58:17.858229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.244 qpair failed and we were unable to recover it. 00:30:28.244 [2024-11-19 10:58:17.858497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.244 [2024-11-19 10:58:17.858529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.244 qpair failed and we were unable to recover it. 
00:30:28.244 [2024-11-19 10:58:17.858656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.244 [2024-11-19 10:58:17.858688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.244 qpair failed and we were unable to recover it. 00:30:28.244 [2024-11-19 10:58:17.858813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.244 [2024-11-19 10:58:17.858844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.244 qpair failed and we were unable to recover it. 00:30:28.244 [2024-11-19 10:58:17.858969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.244 [2024-11-19 10:58:17.859001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.244 qpair failed and we were unable to recover it. 00:30:28.244 [2024-11-19 10:58:17.859110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.244 [2024-11-19 10:58:17.859141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.244 qpair failed and we were unable to recover it. 00:30:28.244 [2024-11-19 10:58:17.859331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.244 [2024-11-19 10:58:17.859365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.244 qpair failed and we were unable to recover it. 
00:30:28.244 [2024-11-19 10:58:17.859539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.859570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.859768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.859801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.859985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.860017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.860141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.860172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.860431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.860514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.860741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.860779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.860982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.861016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.861196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.861299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.861551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.861585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.861784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.861816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.862024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.862057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.862288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.862322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.862520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.862552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.862680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.862713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.862982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.863014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.863259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.863294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.863483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.863515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.863762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.863795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.863950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.863982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.864173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.864215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.864349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.864381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.864564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.864596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.864793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.864826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.865025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.865058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.865332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.244 [2024-11-19 10:58:17.865365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.244 qpair failed and we were unable to recover it.
00:30:28.244 [2024-11-19 10:58:17.865657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.865690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.865897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.865930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.866036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.866070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.866185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.866228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.866371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.866422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.866560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.866592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.866804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.866848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.867022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.867055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.867173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.867215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.867474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.867507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.867635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.867668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.867793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.867825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.868069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.868100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.868397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.868430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.868642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.868674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.869022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.869056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.869189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.869233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.869437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.869470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.869588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.869620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.869874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.869906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.870159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.870191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.870474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.870507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.870694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.870726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.870913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.870945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.871193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.871235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.871362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.871395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.871518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.871549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.871735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.871768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.872055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.872086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.872283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.872317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.872539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.872572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.872698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.872730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.872865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.872896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.873144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.873183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.873454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.873487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.873627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.873658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.873841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.873873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.874046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.245 [2024-11-19 10:58:17.874079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.245 qpair failed and we were unable to recover it.
00:30:28.245 [2024-11-19 10:58:17.874222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.874254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.874431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.874464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.874706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.874738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.874867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.874900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.875165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.875197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.875405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.875438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.875646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.875677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.875799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.875831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.876056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.876087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.876302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.876336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.876579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.876610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.876804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.876836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.876942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.876972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.877148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.877180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.877406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.877438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.877562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.877594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.877788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.877819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.878073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.878105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.878229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.878266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.878405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.878437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.878544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.878576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.878844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.878876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.879069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.879104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.879317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.879350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.879467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.879498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.879616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.879648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.879892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.879923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.880163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.880196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.880394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.880426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.880625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.880656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.880873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.880904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.881085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.881118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.246 qpair failed and we were unable to recover it.
00:30:28.246 [2024-11-19 10:58:17.881239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.246 [2024-11-19 10:58:17.881273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.881466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.881499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.881578] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:30:28.247 [2024-11-19 10:58:17.881630] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:28.247 [2024-11-19 10:58:17.881684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.881731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.881983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.882013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.882214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.882246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.882434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.882464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.882651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.882684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.882933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.882966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.883271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.883307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.883557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.883589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.883719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.883754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.883944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.883978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.884173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.884216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.884464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.884498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.884633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.884668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.884919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.884952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.885171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.885232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.885522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.885555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.885731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.247 [2024-11-19 10:58:17.885764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.247 qpair failed and we were unable to recover it.
00:30:28.247 [2024-11-19 10:58:17.885940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.885973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 00:30:28.247 [2024-11-19 10:58:17.886103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.886138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 00:30:28.247 [2024-11-19 10:58:17.886419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.886452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 00:30:28.247 [2024-11-19 10:58:17.886751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.886784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 00:30:28.247 [2024-11-19 10:58:17.886960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.886992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 
00:30:28.247 [2024-11-19 10:58:17.887110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.887142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 00:30:28.247 [2024-11-19 10:58:17.887276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.887308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 00:30:28.247 [2024-11-19 10:58:17.887505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.887537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 00:30:28.247 [2024-11-19 10:58:17.887723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.887753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 00:30:28.247 [2024-11-19 10:58:17.887886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.887918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 
00:30:28.247 [2024-11-19 10:58:17.888055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.888087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 00:30:28.247 [2024-11-19 10:58:17.888271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.888305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 00:30:28.247 [2024-11-19 10:58:17.888562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.888593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 00:30:28.247 [2024-11-19 10:58:17.888721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.888753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 00:30:28.247 [2024-11-19 10:58:17.888875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.888906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 
00:30:28.247 [2024-11-19 10:58:17.889047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.889080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 00:30:28.247 [2024-11-19 10:58:17.889273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.889306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 00:30:28.247 [2024-11-19 10:58:17.889435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.889467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.247 qpair failed and we were unable to recover it. 00:30:28.247 [2024-11-19 10:58:17.889651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.247 [2024-11-19 10:58:17.889683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.889872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.889904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 
00:30:28.248 [2024-11-19 10:58:17.890151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.890183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.890385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.890418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.890665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.890697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.890897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.890929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.891144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.891175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 
00:30:28.248 [2024-11-19 10:58:17.891370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.891403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.891590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.891622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.891813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.891845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.892106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.892139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.892337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.892370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 
00:30:28.248 [2024-11-19 10:58:17.892506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.892538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.892741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.892772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.892969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.893001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.893291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.893325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.893517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.893549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 
00:30:28.248 [2024-11-19 10:58:17.893736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.893768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.893952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.893984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.894124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.894157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.894362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.894401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.894582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.894614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 
00:30:28.248 [2024-11-19 10:58:17.894729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.894762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.895023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.895055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.895248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.895282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.895468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.895499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.895622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.895654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 
00:30:28.248 [2024-11-19 10:58:17.895910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.895942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.896061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.896094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.896273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.896307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.896479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.896511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.896788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.896820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 
00:30:28.248 [2024-11-19 10:58:17.896954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.896987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.897162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.897195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.897370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.897403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.897552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.897585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.897770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.897802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 
00:30:28.248 [2024-11-19 10:58:17.897906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.897938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.248 qpair failed and we were unable to recover it. 00:30:28.248 [2024-11-19 10:58:17.898226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.248 [2024-11-19 10:58:17.898260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 00:30:28.249 [2024-11-19 10:58:17.898376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.898409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 00:30:28.249 [2024-11-19 10:58:17.898628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.898659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 00:30:28.249 [2024-11-19 10:58:17.898856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.898889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 
00:30:28.249 [2024-11-19 10:58:17.899035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.899067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 00:30:28.249 [2024-11-19 10:58:17.899266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.899300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 00:30:28.249 [2024-11-19 10:58:17.899494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.899526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 00:30:28.249 [2024-11-19 10:58:17.899641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.899673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 00:30:28.249 [2024-11-19 10:58:17.899845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.899876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 
00:30:28.249 [2024-11-19 10:58:17.900120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.900158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 00:30:28.249 [2024-11-19 10:58:17.900409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.900442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 00:30:28.249 [2024-11-19 10:58:17.900625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.900658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 00:30:28.249 [2024-11-19 10:58:17.900867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.900899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 00:30:28.249 [2024-11-19 10:58:17.901038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.901071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 
00:30:28.249 [2024-11-19 10:58:17.901319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.901352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 00:30:28.249 [2024-11-19 10:58:17.901601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.901633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 00:30:28.249 [2024-11-19 10:58:17.901765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.901797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 00:30:28.249 [2024-11-19 10:58:17.901992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.902023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 00:30:28.249 [2024-11-19 10:58:17.902257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.249 [2024-11-19 10:58:17.902292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.249 qpair failed and we were unable to recover it. 
00:30:28.249 [2024-11-19 10:58:17.902469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.902501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.902639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.902672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.902912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.902945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.903128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.903160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.903394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.903427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.903695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.903727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.903909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.903939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.904069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.904102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.904317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.904351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.904616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.904647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.904865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.904899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.905071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.905102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.905247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.905281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.905457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.905488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.905675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.905706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.905824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.905864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.906103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.906136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.906346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.906402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.906598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.906630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.249 qpair failed and we were unable to recover it.
00:30:28.249 [2024-11-19 10:58:17.906802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.249 [2024-11-19 10:58:17.906834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.906985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.907018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.907140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.907172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.907346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.907418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.907636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.907672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.907862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.907895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.908086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.908119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.908234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.908269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.908462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.908494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.908685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.908716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.908909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.908941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.909130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.909164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.909384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.909419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.909612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.909652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.909858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.909899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.910038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.910070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.910282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.910317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.910631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.910663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.910903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.910936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.911198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.911240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.911416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.911449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.911722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.911754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.912001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.912034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.912232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.912266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.912470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.912502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.912625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.912664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.912785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.912818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.912935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.912967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.913156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.913188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.913305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.913337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.913522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.913554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.913668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.913701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.913881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.913913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.250 qpair failed and we were unable to recover it.
00:30:28.250 [2024-11-19 10:58:17.914102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.250 [2024-11-19 10:58:17.914134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.914398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.914432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.914620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.914652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.914774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.914805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.915091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.915123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.915303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.915334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.915519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.915551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.915723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.915753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.915938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.915970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.916144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.916176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.916328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.916378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.916551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.916583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.916763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.916793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.917031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.917063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.917307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.917340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.917451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.917483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.917624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.917655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.917858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.917888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.918076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.918107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.918304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.918337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.918600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.918631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.918800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.918833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.919019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.919051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.919222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.919255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.919503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.919535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.919725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.919758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.919931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.919962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.920091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.920124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.920300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.920333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.920549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.920581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.920780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.920812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.921006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.921038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.921179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.921227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.921522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.921553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.921813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.921844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.921973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.251 [2024-11-19 10:58:17.922004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.251 qpair failed and we were unable to recover it.
00:30:28.251 [2024-11-19 10:58:17.922136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.922166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.922388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.922422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.922610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.922641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.922815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.922847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.922983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.923014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.923217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.923251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.923459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.923490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.923598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.923629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.923749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.923780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.923991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.924023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.924142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.924173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.924373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.924406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.924666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.924697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.924888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.924920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.925091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.925122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.925293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.925327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.925441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.925473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.925666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.925699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.925929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.925960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.926222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.926255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.926448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.926479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.926590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.926621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.926803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.926835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.927128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.927162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.927385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.927418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.927534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.927566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.927733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.252 [2024-11-19 10:58:17.927765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.252 qpair failed and we were unable to recover it.
00:30:28.252 [2024-11-19 10:58:17.928014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.252 [2024-11-19 10:58:17.928045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.252 qpair failed and we were unable to recover it. 00:30:28.252 [2024-11-19 10:58:17.928293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.252 [2024-11-19 10:58:17.928327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.252 qpair failed and we were unable to recover it. 00:30:28.252 [2024-11-19 10:58:17.928518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.252 [2024-11-19 10:58:17.928549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.252 qpair failed and we were unable to recover it. 00:30:28.252 [2024-11-19 10:58:17.928680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.252 [2024-11-19 10:58:17.928710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.252 qpair failed and we were unable to recover it. 00:30:28.252 [2024-11-19 10:58:17.928908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.252 [2024-11-19 10:58:17.928940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.252 qpair failed and we were unable to recover it. 
00:30:28.252 [2024-11-19 10:58:17.929062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.252 [2024-11-19 10:58:17.929094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.252 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.929272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.929305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.929501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.929532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.929722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.929752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.929891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.929928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 
00:30:28.253 [2024-11-19 10:58:17.930166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.930198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.930324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.930356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.930596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.930628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.930803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.930833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.931024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.931055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 
00:30:28.253 [2024-11-19 10:58:17.931185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.931222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.931404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.931437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.931623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.931654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.931896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.931928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.932036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.932067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 
00:30:28.253 [2024-11-19 10:58:17.932241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.932273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.932405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.932435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.932643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.932675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.932873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.932904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.933075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.933107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 
00:30:28.253 [2024-11-19 10:58:17.933294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.933327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.933536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.933568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.933771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.933802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.933967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.933998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.934180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.934220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 
00:30:28.253 [2024-11-19 10:58:17.934432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.934463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.934639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.934671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.934804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.934834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.934951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.934982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.935090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.935122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 
00:30:28.253 [2024-11-19 10:58:17.935240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.935272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.935491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.935522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.935707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.935738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.935913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.935945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 00:30:28.253 [2024-11-19 10:58:17.936115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.253 [2024-11-19 10:58:17.936146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.253 qpair failed and we were unable to recover it. 
00:30:28.253 [2024-11-19 10:58:17.936286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.936317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.936507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.936539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.936763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.936794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.936988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.937019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.937260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.937293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 
00:30:28.254 [2024-11-19 10:58:17.937481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.937512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.937765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.937797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.937930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.937960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.938136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.938166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.938286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.938322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 
00:30:28.254 [2024-11-19 10:58:17.938533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.938565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.938803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.938834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.939015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.939047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.939285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.939319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.939558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.939589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 
00:30:28.254 [2024-11-19 10:58:17.939694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.939725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.939962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.939992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.940257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.940289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.940463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.940494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.940733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.940765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 
00:30:28.254 [2024-11-19 10:58:17.940875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.940906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.941114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.941147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.941272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.941304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.941486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.941517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.941793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.941823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 
00:30:28.254 [2024-11-19 10:58:17.941953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.941984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.942193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.942327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.942531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.942563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.942702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.942734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.942973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.943005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 
00:30:28.254 [2024-11-19 10:58:17.943275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.943308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.943563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.943594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.943835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.943866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.254 [2024-11-19 10:58:17.944042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.254 [2024-11-19 10:58:17.944074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.254 qpair failed and we were unable to recover it. 00:30:28.255 [2024-11-19 10:58:17.944247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.255 [2024-11-19 10:58:17.944280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.255 qpair failed and we were unable to recover it. 
00:30:28.255 [2024-11-19 10:58:17.944456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.255 [2024-11-19 10:58:17.944487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.255 qpair failed and we were unable to recover it. 00:30:28.255 [2024-11-19 10:58:17.944678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.255 [2024-11-19 10:58:17.944711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.255 qpair failed and we were unable to recover it. 00:30:28.255 [2024-11-19 10:58:17.944919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.255 [2024-11-19 10:58:17.944950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.255 qpair failed and we were unable to recover it. 00:30:28.255 [2024-11-19 10:58:17.945249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.255 [2024-11-19 10:58:17.945282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.255 qpair failed and we were unable to recover it. 00:30:28.255 [2024-11-19 10:58:17.945488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.255 [2024-11-19 10:58:17.945521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.255 qpair failed and we were unable to recover it. 
00:30:28.258 [2024-11-19 10:58:17.967683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.258 [2024-11-19 10:58:17.967715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.258 qpair failed and we were unable to recover it. 00:30:28.258 [2024-11-19 10:58:17.967771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:28.258 [2024-11-19 10:58:17.967959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.258 [2024-11-19 10:58:17.967991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.258 qpair failed and we were unable to recover it. 00:30:28.258 [2024-11-19 10:58:17.968250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.258 [2024-11-19 10:58:17.968289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.258 qpair failed and we were unable to recover it. 00:30:28.258 [2024-11-19 10:58:17.968476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.258 [2024-11-19 10:58:17.968509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.258 qpair failed and we were unable to recover it. 00:30:28.258 [2024-11-19 10:58:17.968701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.258 [2024-11-19 10:58:17.968735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.258 qpair failed and we were unable to recover it. 
00:30:28.258 [2024-11-19 10:58:17.968922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.968952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.969137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.969169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.969460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.969494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.969667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.969699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.969881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.969912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.970085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.970116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.970236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.970269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.970508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.970539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.970749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.970780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.971019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.971050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.971341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.971374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.971582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.971614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.971801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.971834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.972044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.972075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.972336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.972370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.972494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.972526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.972654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.972685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.972871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.972903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.973073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.973105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.973288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.258 [2024-11-19 10:58:17.973323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.258 qpair failed and we were unable to recover it.
00:30:28.258 [2024-11-19 10:58:17.973557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.973589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.973707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.973738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.973871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.973902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.974088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.974120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.974339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.974412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.974712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.974784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.975137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.975216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.975377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.975413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.975630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.975663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.975789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.975821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.976005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.976038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.976302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.976336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.976527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.976559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.976806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.976839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.977128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.977160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.977294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.977328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.977441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.977474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.977747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.977788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.978004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.978037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.978213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.978248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.978428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.978460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.978646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.978686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.978833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.978873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.979221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.979262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.979456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.979489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.979676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.979708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.979824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.979856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.980058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.980091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.980342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.980376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.980507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.980539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.980785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.980817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.981016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.981049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.981293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.981326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.981582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.981613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.981743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.981775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.981962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.981993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.982125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.982156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.982304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.982339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.982581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.982614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.259 qpair failed and we were unable to recover it.
00:30:28.259 [2024-11-19 10:58:17.982797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.259 [2024-11-19 10:58:17.982829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.260 qpair failed and we were unable to recover it.
00:30:28.260 [2024-11-19 10:58:17.983035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.260 [2024-11-19 10:58:17.983067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.260 qpair failed and we were unable to recover it.
00:30:28.260 [2024-11-19 10:58:17.983237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.260 [2024-11-19 10:58:17.983271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.260 qpair failed and we were unable to recover it.
00:30:28.260 [2024-11-19 10:58:17.983447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.260 [2024-11-19 10:58:17.983485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.260 qpair failed and we were unable to recover it.
00:30:28.260 [2024-11-19 10:58:17.983617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.260 [2024-11-19 10:58:17.983649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.260 qpair failed and we were unable to recover it.
00:30:28.535 [2024-11-19 10:58:17.983819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.535 [2024-11-19 10:58:17.983875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.535 qpair failed and we were unable to recover it.
00:30:28.535 [2024-11-19 10:58:17.984082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.535 [2024-11-19 10:58:17.984126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.535 qpair failed and we were unable to recover it.
00:30:28.535 [2024-11-19 10:58:17.984322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.535 [2024-11-19 10:58:17.984358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.535 qpair failed and we were unable to recover it.
00:30:28.535 [2024-11-19 10:58:17.984535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.535 [2024-11-19 10:58:17.984568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.535 qpair failed and we were unable to recover it.
00:30:28.535 [2024-11-19 10:58:17.984843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.535 [2024-11-19 10:58:17.984876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.535 qpair failed and we were unable to recover it.
00:30:28.535 [2024-11-19 10:58:17.985140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.535 [2024-11-19 10:58:17.985174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.535 qpair failed and we were unable to recover it.
00:30:28.535 [2024-11-19 10:58:17.985319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.535 [2024-11-19 10:58:17.985356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.535 qpair failed and we were unable to recover it.
00:30:28.535 [2024-11-19 10:58:17.985470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.535 [2024-11-19 10:58:17.985512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.535 qpair failed and we were unable to recover it.
00:30:28.535 [2024-11-19 10:58:17.985638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.535 [2024-11-19 10:58:17.985670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.535 qpair failed and we were unable to recover it.
00:30:28.535 [2024-11-19 10:58:17.985854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.535 [2024-11-19 10:58:17.985886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.535 qpair failed and we were unable to recover it.
00:30:28.535 [2024-11-19 10:58:17.986021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.535 [2024-11-19 10:58:17.986054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.535 qpair failed and we were unable to recover it.
00:30:28.535 [2024-11-19 10:58:17.986229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.535 [2024-11-19 10:58:17.986262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.535 qpair failed and we were unable to recover it.
00:30:28.535 [2024-11-19 10:58:17.986445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.535 [2024-11-19 10:58:17.986477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.535 qpair failed and we were unable to recover it.
00:30:28.535 [2024-11-19 10:58:17.986723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.535 [2024-11-19 10:58:17.986756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.535 qpair failed and we were unable to recover it.
00:30:28.535 [2024-11-19 10:58:17.987052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.535 [2024-11-19 10:58:17.987084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.535 qpair failed and we were unable to recover it.
00:30:28.535 [2024-11-19 10:58:17.987200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.536 [2024-11-19 10:58:17.987244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.536 qpair failed and we were unable to recover it.
00:30:28.536 [2024-11-19 10:58:17.987485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.536 [2024-11-19 10:58:17.987517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.536 qpair failed and we were unable to recover it.
00:30:28.536 [2024-11-19 10:58:17.987699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.536 [2024-11-19 10:58:17.987732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.536 qpair failed and we were unable to recover it.
00:30:28.536 [2024-11-19 10:58:17.987914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.536 [2024-11-19 10:58:17.987946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.536 qpair failed and we were unable to recover it.
00:30:28.536 [2024-11-19 10:58:17.988072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.536 [2024-11-19 10:58:17.988104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.536 qpair failed and we were unable to recover it.
00:30:28.536 [2024-11-19 10:58:17.988317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.536 [2024-11-19 10:58:17.988351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.536 qpair failed and we were unable to recover it.
00:30:28.536 [2024-11-19 10:58:17.988535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.536 [2024-11-19 10:58:17.988567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.536 qpair failed and we were unable to recover it.
00:30:28.536 [2024-11-19 10:58:17.988764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.536 [2024-11-19 10:58:17.988797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.536 qpair failed and we were unable to recover it.
00:30:28.536 [2024-11-19 10:58:17.988970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.536 [2024-11-19 10:58:17.989002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.536 qpair failed and we were unable to recover it.
00:30:28.536 [2024-11-19 10:58:17.989183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.536 [2024-11-19 10:58:17.989227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.536 qpair failed and we were unable to recover it.
00:30:28.536 [2024-11-19 10:58:17.989473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.536 [2024-11-19 10:58:17.989506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.536 qpair failed and we were unable to recover it.
00:30:28.536 [2024-11-19 10:58:17.989614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.989646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 00:30:28.536 [2024-11-19 10:58:17.989894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.989927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 00:30:28.536 [2024-11-19 10:58:17.990118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.990150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 00:30:28.536 [2024-11-19 10:58:17.990417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.990451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 00:30:28.536 [2024-11-19 10:58:17.990765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.990797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 
00:30:28.536 [2024-11-19 10:58:17.991047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.991080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 00:30:28.536 [2024-11-19 10:58:17.991249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.991282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 00:30:28.536 [2024-11-19 10:58:17.991523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.991555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 00:30:28.536 [2024-11-19 10:58:17.991686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.991718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 00:30:28.536 [2024-11-19 10:58:17.991915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.991947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 
00:30:28.536 [2024-11-19 10:58:17.992117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.992150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 00:30:28.536 [2024-11-19 10:58:17.992398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.992431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 00:30:28.536 [2024-11-19 10:58:17.992670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.992702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 00:30:28.536 [2024-11-19 10:58:17.992836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.992868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 00:30:28.536 [2024-11-19 10:58:17.993001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.993052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 
00:30:28.536 [2024-11-19 10:58:17.993339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.993373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 00:30:28.536 [2024-11-19 10:58:17.993495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.993527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 00:30:28.536 [2024-11-19 10:58:17.993710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.993742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 00:30:28.536 [2024-11-19 10:58:17.993927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.993959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 00:30:28.536 [2024-11-19 10:58:17.994140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.536 [2024-11-19 10:58:17.994172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.536 qpair failed and we were unable to recover it. 
00:30:28.536 [2024-11-19 10:58:17.994363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.536 [2024-11-19 10:58:17.994400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.536 qpair failed and we were unable to recover it.
00:30:28.536 [2024-11-19 10:58:17.994519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.536 [2024-11-19 10:58:17.994551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.536 qpair failed and we were unable to recover it.
00:30:28.536 [2024-11-19 10:58:17.994737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.536 [2024-11-19 10:58:17.994769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.536 qpair failed and we were unable to recover it.
00:30:28.536 [2024-11-19 10:58:17.994948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.536 [2024-11-19 10:58:17.994980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.536 qpair failed and we were unable to recover it.
00:30:28.537 [2024-11-19 10:58:17.995157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.537 [2024-11-19 10:58:17.995187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.537 qpair failed and we were unable to recover it.
00:30:28.537 [2024-11-19 10:58:18.002933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.538 [2024-11-19 10:58:18.002965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.538 qpair failed and we were unable to recover it.
00:30:28.538 [2024-11-19 10:58:18.003141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.538 [2024-11-19 10:58:18.003173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.538 qpair failed and we were unable to recover it.
00:30:28.538 [2024-11-19 10:58:18.003370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.538 [2024-11-19 10:58:18.003408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.538 qpair failed and we were unable to recover it.
00:30:28.538 [2024-11-19 10:58:18.003581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.538 [2024-11-19 10:58:18.003613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.538 qpair failed and we were unable to recover it.
00:30:28.538 [2024-11-19 10:58:18.003853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.538 [2024-11-19 10:58:18.003885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.538 qpair failed and we were unable to recover it.
00:30:28.538 [2024-11-19 10:58:18.009664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.538 [2024-11-19 10:58:18.009697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.538 qpair failed and we were unable to recover it.
00:30:28.538 [2024-11-19 10:58:18.009826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.538 [2024-11-19 10:58:18.009859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.539 qpair failed and we were unable to recover it.
00:30:28.538 [2024-11-19 10:58:18.009838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:28.538 [2024-11-19 10:58:18.009862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:28.538 [2024-11-19 10:58:18.009870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:28.539 [2024-11-19 10:58:18.009877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:28.539 [2024-11-19 10:58:18.009882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:28.539 [2024-11-19 10:58:18.009984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.539 [2024-11-19 10:58:18.010020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.539 qpair failed and we were unable to recover it.
00:30:28.539 [2024-11-19 10:58:18.010209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.010243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.010453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.010486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.010669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.010702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.010885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.010917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.011094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.011127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 
00:30:28.539 [2024-11-19 10:58:18.011316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.011349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.011487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.011519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.011539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:28.539 [2024-11-19 10:58:18.011646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:28.539 [2024-11-19 10:58:18.011724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.011752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:28.539 [2024-11-19 10:58:18.011767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.011753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:28.539 [2024-11-19 10:58:18.011903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.011934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 
00:30:28.539 [2024-11-19 10:58:18.012219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.012253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.012392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.012424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.012618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.012656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.012777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.012817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.013059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.013090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 
00:30:28.539 [2024-11-19 10:58:18.013286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.013321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.013440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.013473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.013652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.013685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.013930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.013962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.014139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.014181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 
00:30:28.539 [2024-11-19 10:58:18.014389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.014433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.014701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.014732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.014921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.014953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.015220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.015254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.015443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.015476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 
00:30:28.539 [2024-11-19 10:58:18.015660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.015693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.015827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.015861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.016059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.016090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.016220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.016254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.016377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.016409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 
00:30:28.539 [2024-11-19 10:58:18.016651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.016684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.016934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.016966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.017120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.017152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.017456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.017491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 00:30:28.539 [2024-11-19 10:58:18.017758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.539 [2024-11-19 10:58:18.017791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.539 qpair failed and we were unable to recover it. 
00:30:28.540 [2024-11-19 10:58:18.018084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.018117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.018426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.018461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.018597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.018630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.018895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.018928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.019129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.019168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 
00:30:28.540 [2024-11-19 10:58:18.019477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.019533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.019679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.019720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.019992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.020026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.020157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.020190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.020446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.020479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 
00:30:28.540 [2024-11-19 10:58:18.020662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.020694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.020961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.020993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.021168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.021214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.021353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.021386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.021648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.021681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 
00:30:28.540 [2024-11-19 10:58:18.021878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.021911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.022175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.022218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.022461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.022495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.022690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.022722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.022971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.023004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 
00:30:28.540 [2024-11-19 10:58:18.023250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.023285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.023430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.023463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.023658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.023691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.023886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.023918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.024109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.024142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 
00:30:28.540 [2024-11-19 10:58:18.024277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.024310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.024618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.024650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.024899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.024931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.025119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.025152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.540 [2024-11-19 10:58:18.025363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.025397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 
00:30:28.540 [2024-11-19 10:58:18.025583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.540 [2024-11-19 10:58:18.025616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.540 qpair failed and we were unable to recover it. 00:30:28.541 [2024-11-19 10:58:18.025811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.025856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 00:30:28.541 [2024-11-19 10:58:18.025990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.026022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 00:30:28.541 [2024-11-19 10:58:18.026228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.026261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 00:30:28.541 [2024-11-19 10:58:18.026453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.026486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 
00:30:28.541 [2024-11-19 10:58:18.026675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.026707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 00:30:28.541 [2024-11-19 10:58:18.026937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.026970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 00:30:28.541 [2024-11-19 10:58:18.027142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.027174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 00:30:28.541 [2024-11-19 10:58:18.027471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.027508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 00:30:28.541 [2024-11-19 10:58:18.027776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.027808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 
00:30:28.541 [2024-11-19 10:58:18.028089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.028122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 00:30:28.541 [2024-11-19 10:58:18.028404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.028437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 00:30:28.541 [2024-11-19 10:58:18.028641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.028673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 00:30:28.541 [2024-11-19 10:58:18.028855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.028887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 00:30:28.541 [2024-11-19 10:58:18.029150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.029183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 
00:30:28.541 [2024-11-19 10:58:18.029444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.029477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 00:30:28.541 [2024-11-19 10:58:18.029687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.029719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 00:30:28.541 [2024-11-19 10:58:18.029956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.029989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 00:30:28.541 [2024-11-19 10:58:18.030227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.030260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 00:30:28.541 [2024-11-19 10:58:18.030431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.541 [2024-11-19 10:58:18.030463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.541 qpair failed and we were unable to recover it. 
00:30:28.541 [2024-11-19 10:58:18.030704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.541 [2024-11-19 10:58:18.030736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.541 qpair failed and we were unable to recover it.
00:30:28.541 [... same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeated 10 more times for tqpair=0x7f6b38000b90, 10:58:18.031000 through 10:58:18.033173 ...]
00:30:28.541 [2024-11-19 10:58:18.033425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.541 [2024-11-19 10:58:18.033468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.541 qpair failed and we were unable to recover it.
00:30:28.542 [... same sequence repeated 36 more times for tqpair=0x7f6b40000b90, 10:58:18.033658 through 10:58:18.042872 ...]
00:30:28.542 [2024-11-19 10:58:18.043190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.542 [2024-11-19 10:58:18.043258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.542 qpair failed and we were unable to recover it.
00:30:28.543 [... same sequence repeated 26 more times for tqpair=0x239cba0, 10:58:18.043410 through 10:58:18.050173 ...]
00:30:28.543 [2024-11-19 10:58:18.050456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.543 [2024-11-19 10:58:18.050497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.543 qpair failed and we were unable to recover it.
00:30:28.544 [... same sequence repeated 39 more times for tqpair=0x7f6b40000b90, 10:58:18.050745 through 10:58:18.060954 ...]
00:30:28.544 [2024-11-19 10:58:18.061156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.544 [2024-11-19 10:58:18.061195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.544 qpair failed and we were unable to recover it. 00:30:28.544 [2024-11-19 10:58:18.061410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.544 [2024-11-19 10:58:18.061443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.544 qpair failed and we were unable to recover it. 00:30:28.544 [2024-11-19 10:58:18.061700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.544 [2024-11-19 10:58:18.061732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.544 qpair failed and we were unable to recover it. 00:30:28.544 [2024-11-19 10:58:18.061936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.544 [2024-11-19 10:58:18.061968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.544 qpair failed and we were unable to recover it. 00:30:28.544 [2024-11-19 10:58:18.062160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.544 [2024-11-19 10:58:18.062192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.544 qpair failed and we were unable to recover it. 
00:30:28.544 [2024-11-19 10:58:18.062452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.544 [2024-11-19 10:58:18.062484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.544 qpair failed and we were unable to recover it. 00:30:28.544 [2024-11-19 10:58:18.062667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.544 [2024-11-19 10:58:18.062699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.544 qpair failed and we were unable to recover it. 00:30:28.544 [2024-11-19 10:58:18.062891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.544 [2024-11-19 10:58:18.062922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.544 qpair failed and we were unable to recover it. 00:30:28.544 [2024-11-19 10:58:18.063094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.544 [2024-11-19 10:58:18.063126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.544 qpair failed and we were unable to recover it. 00:30:28.544 [2024-11-19 10:58:18.063304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.544 [2024-11-19 10:58:18.063338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.544 qpair failed and we were unable to recover it. 
00:30:28.544 [2024-11-19 10:58:18.063548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.544 [2024-11-19 10:58:18.063580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.544 qpair failed and we were unable to recover it. 00:30:28.544 [2024-11-19 10:58:18.063782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.544 [2024-11-19 10:58:18.063815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.544 qpair failed and we were unable to recover it. 00:30:28.544 [2024-11-19 10:58:18.064003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.544 [2024-11-19 10:58:18.064035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.544 qpair failed and we were unable to recover it. 00:30:28.544 [2024-11-19 10:58:18.064172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.544 [2024-11-19 10:58:18.064212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.544 qpair failed and we were unable to recover it. 00:30:28.544 [2024-11-19 10:58:18.064344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.544 [2024-11-19 10:58:18.064376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.544 qpair failed and we were unable to recover it. 
00:30:28.544 [2024-11-19 10:58:18.064619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.544 [2024-11-19 10:58:18.064652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.544 qpair failed and we were unable to recover it. 00:30:28.544 [2024-11-19 10:58:18.064842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.064873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.065139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.065171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.065352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.065385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.065506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.065538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 
00:30:28.545 [2024-11-19 10:58:18.065816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.065848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.066096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.066129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.066389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.066424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.066688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.066720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.066852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.066884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 
00:30:28.545 [2024-11-19 10:58:18.067055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.067087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.067337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.067372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.067596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.067630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.067897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.067930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.068212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.068244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 
00:30:28.545 [2024-11-19 10:58:18.068421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.068454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.068635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.068667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.068850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.068882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.069069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.069101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.069224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.069257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 
00:30:28.545 [2024-11-19 10:58:18.069452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.069484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.069621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.069653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.069828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.069858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.070128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.070160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.070371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.070404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 
00:30:28.545 [2024-11-19 10:58:18.070590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.070633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.070839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.070871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.071040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.071070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.071311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.071344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.071473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.071504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 
00:30:28.545 [2024-11-19 10:58:18.071742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.071774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.071915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.071945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.072218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.072250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.072434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.072466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.545 qpair failed and we were unable to recover it. 00:30:28.545 [2024-11-19 10:58:18.072603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.545 [2024-11-19 10:58:18.072633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 
00:30:28.546 [2024-11-19 10:58:18.072804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.072837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.073075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.073107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.073300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.073333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.073452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.073483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.073726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.073759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 
00:30:28.546 [2024-11-19 10:58:18.074031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.074062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.074302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.074336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.074594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.074625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.074867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.074899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.075164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.075195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 
00:30:28.546 [2024-11-19 10:58:18.075405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.075437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.075624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.075655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.075923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.075954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.076264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.076299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.076549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.076582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 
00:30:28.546 [2024-11-19 10:58:18.076818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.076850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.077041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.077072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.077259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.077292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.077551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.077582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.077770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.077802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 
00:30:28.546 [2024-11-19 10:58:18.077996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.078028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.078244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.078277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.078536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.078568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.078702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.078734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 00:30:28.546 [2024-11-19 10:58:18.078919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.078950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it. 
00:30:28.546 [2024-11-19 10:58:18.079191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.546 [2024-11-19 10:58:18.079232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.546 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats continuously from 10:58:18.079 through 10:58:18.109 (errno = 111, tqpair=0x7f6b40000b90, addr=10.0.0.2, port=4420); repeats elided ...]
00:30:28.549 [2024-11-19 10:58:18.109528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.549 [2024-11-19 10:58:18.109560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.549 qpair failed and we were unable to recover it. 00:30:28.549 [2024-11-19 10:58:18.109835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.549 [2024-11-19 10:58:18.109867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.549 qpair failed and we were unable to recover it. 00:30:28.549 [2024-11-19 10:58:18.110146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.549 [2024-11-19 10:58:18.110177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.110460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.110492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.110734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.110766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 
00:30:28.550 [2024-11-19 10:58:18.110985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.111017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.111215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.111248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.111485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.111516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.111702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.111734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.111997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.112028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 
00:30:28.550 [2024-11-19 10:58:18.112267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.112305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.112543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.112574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.112829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.112861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.113149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.113181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.113456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.113488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 
00:30:28.550 [2024-11-19 10:58:18.113756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.113787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.113957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.113990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.114193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.114232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.114388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.114420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.114611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.114643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 
00:30:28.550 [2024-11-19 10:58:18.114893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.114923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.115120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.115152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.550 [2024-11-19 10:58:18.115429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.115462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:28.550 [2024-11-19 10:58:18.115699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.115733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 
00:30:28.550 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:28.550 [2024-11-19 10:58:18.116001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.116036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:28.550 [2024-11-19 10:58:18.116320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.116355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:28.550 [2024-11-19 10:58:18.116622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.116654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.116944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.116977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 
00:30:28.550 [2024-11-19 10:58:18.117114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.117146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.117416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.117449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.117570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.117601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.550 [2024-11-19 10:58:18.117870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.550 [2024-11-19 10:58:18.117903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.550 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.118145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.118176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 
00:30:28.551 [2024-11-19 10:58:18.118388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.118421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.118659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.118690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.118910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.118942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.119200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.119241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.119500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.119532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 
00:30:28.551 [2024-11-19 10:58:18.119817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.119849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.120078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.120110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.120316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.120351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.120592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.120625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.120884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.120916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 
00:30:28.551 [2024-11-19 10:58:18.121106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.121136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.121341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.121373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.121619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.121651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.121915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.121946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.122132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.122163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 
00:30:28.551 [2024-11-19 10:58:18.122438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.122477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.122670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.122702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.122955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.122988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.123249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.123282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.123489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.123520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 
00:30:28.551 [2024-11-19 10:58:18.123769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.123802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.124035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.124067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.124179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.124222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.124437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.124469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.124705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.124737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 
00:30:28.551 [2024-11-19 10:58:18.125043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.125075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.125346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.125380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.125566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.125598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.125859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.125891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.126126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.126158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 
00:30:28.551 [2024-11-19 10:58:18.126371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.126405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.126644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.126678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.126866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.126899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.127071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.551 [2024-11-19 10:58:18.127104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.551 qpair failed and we were unable to recover it. 00:30:28.551 [2024-11-19 10:58:18.127347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.552 [2024-11-19 10:58:18.127383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.552 qpair failed and we were unable to recover it. 
00:30:28.552 [2024-11-19 10:58:18.127555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.552 [2024-11-19 10:58:18.127585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.552 qpair failed and we were unable to recover it. 00:30:28.552 [2024-11-19 10:58:18.127778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.552 [2024-11-19 10:58:18.127811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.552 qpair failed and we were unable to recover it. 00:30:28.552 [2024-11-19 10:58:18.128048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.552 [2024-11-19 10:58:18.128081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.552 qpair failed and we were unable to recover it. 00:30:28.552 [2024-11-19 10:58:18.128312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.552 [2024-11-19 10:58:18.128344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.552 qpair failed and we were unable to recover it. 00:30:28.552 [2024-11-19 10:58:18.128627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.552 [2024-11-19 10:58:18.128659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.552 qpair failed and we were unable to recover it. 
00:30:28.552 [2024-11-19 10:58:18.128954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.552 [2024-11-19 10:58:18.128987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.552 qpair failed and we were unable to recover it. 00:30:28.552 [2024-11-19 10:58:18.129263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.552 [2024-11-19 10:58:18.129297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.552 qpair failed and we were unable to recover it. 00:30:28.552 [2024-11-19 10:58:18.129443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.552 [2024-11-19 10:58:18.129476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.552 qpair failed and we were unable to recover it. 00:30:28.552 [2024-11-19 10:58:18.129669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.552 [2024-11-19 10:58:18.129702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.552 qpair failed and we were unable to recover it. 00:30:28.552 [2024-11-19 10:58:18.129843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.552 [2024-11-19 10:58:18.129874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.552 qpair failed and we were unable to recover it. 
00:30:28.552 [2024-11-19 10:58:18.130058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.552 [2024-11-19 10:58:18.130090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.552 qpair failed and we were unable to recover it. 00:30:28.552 [2024-11-19 10:58:18.130281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.552 [2024-11-19 10:58:18.130314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.552 qpair failed and we were unable to recover it. 00:30:28.552 [2024-11-19 10:58:18.130552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.552 [2024-11-19 10:58:18.130583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.552 qpair failed and we were unable to recover it. 00:30:28.552 [2024-11-19 10:58:18.130826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.552 [2024-11-19 10:58:18.130858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.552 qpair failed and we were unable to recover it. 00:30:28.552 [2024-11-19 10:58:18.131118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.552 [2024-11-19 10:58:18.131149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.552 qpair failed and we were unable to recover it. 
00:30:28.552 [2024-11-19 10:58:18.131377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.552 [2024-11-19 10:58:18.131410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.552 qpair failed and we were unable to recover it.
00:30:28.552 [2024-11-19 10:58:18.131742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.552 [2024-11-19 10:58:18.131801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.552 qpair failed and we were unable to recover it.
00:30:28.552 [2024-11-19 10:58:18.132051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.552 [2024-11-19 10:58:18.132084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.552 qpair failed and we were unable to recover it.
00:30:28.552 [2024-11-19 10:58:18.132323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.552 [2024-11-19 10:58:18.132356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.552 qpair failed and we were unable to recover it.
00:30:28.552 [2024-11-19 10:58:18.132486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.552 [2024-11-19 10:58:18.132518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.552 qpair failed and we were unable to recover it.
00:30:28.552 [2024-11-19 10:58:18.132779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.552 [2024-11-19 10:58:18.132820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.552 qpair failed and we were unable to recover it.
00:30:28.552 [2024-11-19 10:58:18.133000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.552 [2024-11-19 10:58:18.133032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.552 qpair failed and we were unable to recover it.
00:30:28.552 [2024-11-19 10:58:18.133162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.552 [2024-11-19 10:58:18.133194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.552 qpair failed and we were unable to recover it.
00:30:28.552 [2024-11-19 10:58:18.133337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.552 [2024-11-19 10:58:18.133375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.552 qpair failed and we were unable to recover it.
00:30:28.552 [2024-11-19 10:58:18.133636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.552 [2024-11-19 10:58:18.133669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.552 qpair failed and we were unable to recover it.
00:30:28.552 [2024-11-19 10:58:18.133944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.552 [2024-11-19 10:58:18.133976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.552 qpair failed and we were unable to recover it.
00:30:28.552 [2024-11-19 10:58:18.134260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.552 [2024-11-19 10:58:18.134293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.552 qpair failed and we were unable to recover it.
00:30:28.552 [2024-11-19 10:58:18.134437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.552 [2024-11-19 10:58:18.134472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.552 qpair failed and we were unable to recover it.
00:30:28.552 [2024-11-19 10:58:18.134709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.552 [2024-11-19 10:58:18.134743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.552 qpair failed and we were unable to recover it.
00:30:28.552 [2024-11-19 10:58:18.135010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.552 [2024-11-19 10:58:18.135042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.552 qpair failed and we were unable to recover it.
00:30:28.552 [2024-11-19 10:58:18.135315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.552 [2024-11-19 10:58:18.135348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.552 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.135545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.135576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.135771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.135806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.136068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.136100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.136355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.136388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.136625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.136657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.136773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.136805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.136941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.136971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.137258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.137291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.137463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.137494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.137682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.137713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.138019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.138050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.138352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.138385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.138562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.138594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.138773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.138805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.139031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.139063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.139191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.139235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.139390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.139421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.139567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.139600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.139790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.139822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.140063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.140095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.140336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.140369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.140507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.140539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.140725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.140756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.141006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.141040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.141251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.141283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.141473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.141504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.141694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.141727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.141937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.141969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.142214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.142246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.142538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.142576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.142725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.142757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.142930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.142962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.143234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.143267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.143389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.143419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.143679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.553 [2024-11-19 10:58:18.143712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.553 qpair failed and we were unable to recover it.
00:30:28.553 [2024-11-19 10:58:18.143993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.144024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.144215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.144248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.144440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.144473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.144593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.144623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.144854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.144886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.145147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.145179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.145374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.145406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.145593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.145625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.145927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.145960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.146149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.146183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.146349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.146382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.146518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.146549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.146841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.146873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.147065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.147096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.147353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.147387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.147534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.147565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.147802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.147833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.148008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.148039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.148166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.148196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.148388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.148418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.148552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.148585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.148719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.148751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.148951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.148983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.149156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.149187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.149375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.149410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.149588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.149620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.149912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.149944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.150221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.150253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.554 [2024-11-19 10:58:18.150481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.554 [2024-11-19 10:58:18.150513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.554 qpair failed and we were unable to recover it.
00:30:28.555 [2024-11-19 10:58:18.150705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 [2024-11-19 10:58:18.150737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.555 [2024-11-19 10:58:18.150948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 [2024-11-19 10:58:18.150980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420
00:30:28.555 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.555 [2024-11-19 10:58:18.151193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 [2024-11-19 10:58:18.151237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.555 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:28.555 [2024-11-19 10:58:18.151458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 [2024-11-19 10:58:18.151492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.555 [2024-11-19 10:58:18.151682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 [2024-11-19 10:58:18.151720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.555 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.555 [2024-11-19 10:58:18.152004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:28.555 [2024-11-19 10:58:18.152037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.555 [2024-11-19 10:58:18.152227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 [2024-11-19 10:58:18.152261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.555 [2024-11-19 10:58:18.152406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 [2024-11-19 10:58:18.152436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.555 [2024-11-19 10:58:18.152577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 [2024-11-19 10:58:18.152609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.555 [2024-11-19 10:58:18.152782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 [2024-11-19 10:58:18.152814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.555 [2024-11-19 10:58:18.152954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 [2024-11-19 10:58:18.152985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.555 [2024-11-19 10:58:18.153234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 [2024-11-19 10:58:18.153274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.555 [2024-11-19 10:58:18.153456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 [2024-11-19 10:58:18.153487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.555 [2024-11-19 10:58:18.153628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.555 [2024-11-19 10:58:18.153659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.555 qpair failed and we were unable to recover it. 00:30:28.555 [2024-11-19 10:58:18.153932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.555 [2024-11-19 10:58:18.153964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.555 qpair failed and we were unable to recover it. 00:30:28.555 [2024-11-19 10:58:18.154178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.555 [2024-11-19 10:58:18.154221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.555 qpair failed and we were unable to recover it. 00:30:28.555 [2024-11-19 10:58:18.154403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.555 [2024-11-19 10:58:18.154434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.555 qpair failed and we were unable to recover it. 00:30:28.555 [2024-11-19 10:58:18.154657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.555 [2024-11-19 10:58:18.154690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.555 qpair failed and we were unable to recover it. 
00:30:28.555 [2024-11-19 10:58:18.154944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.555 [2024-11-19 10:58:18.154974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.555 qpair failed and we were unable to recover it. 00:30:28.555 [2024-11-19 10:58:18.155186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.555 [2024-11-19 10:58:18.155228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.555 qpair failed and we were unable to recover it. 00:30:28.555 [2024-11-19 10:58:18.155421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.555 [2024-11-19 10:58:18.155453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.555 qpair failed and we were unable to recover it. 00:30:28.555 [2024-11-19 10:58:18.155640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.555 [2024-11-19 10:58:18.155672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.555 qpair failed and we were unable to recover it. 00:30:28.555 [2024-11-19 10:58:18.155916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.555 [2024-11-19 10:58:18.155947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.555 qpair failed and we were unable to recover it. 
00:30:28.555 [2024-11-19 10:58:18.156126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.555 [2024-11-19 10:58:18.156158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.555 qpair failed and we were unable to recover it. 00:30:28.555 [2024-11-19 10:58:18.156372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.555 [2024-11-19 10:58:18.156404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.555 qpair failed and we were unable to recover it. 00:30:28.555 [2024-11-19 10:58:18.156671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.555 [2024-11-19 10:58:18.156703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.555 qpair failed and we were unable to recover it. 00:30:28.555 [2024-11-19 10:58:18.156961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.555 [2024-11-19 10:58:18.156994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.555 qpair failed and we were unable to recover it. 00:30:28.555 [2024-11-19 10:58:18.157244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.555 [2024-11-19 10:58:18.157279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.555 qpair failed and we were unable to recover it. 
00:30:28.555 [2024-11-19 10:58:18.157472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 [2024-11-19 10:58:18.157505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.555 [2024-11-19 10:58:18.157771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 [2024-11-19 10:58:18.157804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.555 [2024-11-19 10:58:18.158060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 [2024-11-19 10:58:18.158113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.555 [2024-11-19 10:58:18.158433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 [2024-11-19 10:58:18.158502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.555 [2024-11-19 10:58:18.158661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.555 [2024-11-19 10:58:18.158695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420
00:30:28.555 qpair failed and we were unable to recover it.
00:30:28.558 [2024-11-19 10:58:18.181456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.181488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 [2024-11-19 10:58:18.181628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.181660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 [2024-11-19 10:58:18.181846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 Malloc0 00:30:28.558 [2024-11-19 10:58:18.181877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 [2024-11-19 10:58:18.182142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.182174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b34000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 [2024-11-19 10:58:18.182321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.182359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 
00:30:28.558 [2024-11-19 10:58:18.182563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.182595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.558 [2024-11-19 10:58:18.182896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.182930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:28.558 [2024-11-19 10:58:18.183188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.183233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.558 [2024-11-19 10:58:18.183519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.183551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it.
00:30:28.558 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:28.558 [2024-11-19 10:58:18.183741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.183773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 [2024-11-19 10:58:18.183955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.183987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 [2024-11-19 10:58:18.184288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.184322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 [2024-11-19 10:58:18.184605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.184637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 [2024-11-19 10:58:18.184827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.184859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 
00:30:28.558 [2024-11-19 10:58:18.185102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.185134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 [2024-11-19 10:58:18.185373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.185406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 [2024-11-19 10:58:18.185643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.185674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 [2024-11-19 10:58:18.185936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.185974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 [2024-11-19 10:58:18.186239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.186272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 
00:30:28.558 [2024-11-19 10:58:18.186556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.186588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 [2024-11-19 10:58:18.186857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.186888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 [2024-11-19 10:58:18.187076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.187107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 [2024-11-19 10:58:18.187375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.187407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 00:30:28.558 [2024-11-19 10:58:18.187695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.558 [2024-11-19 10:58:18.187727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.558 qpair failed and we were unable to recover it. 
00:30:28.558 [2024-11-19 10:58:18.187869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.187901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.188161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.188192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.188382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.188414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.188629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.188661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.188899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.188930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 
00:30:28.559 [2024-11-19 10:58:18.189139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.189170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b38000b90 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.189386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.189422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.189517] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.559 [2024-11-19 10:58:18.189670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.189703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.189988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.190019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.190219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.190253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 
00:30:28.559 [2024-11-19 10:58:18.190520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.190552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.190793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.190824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.191036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.191068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.191331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.191365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.191545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.191577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 
00:30:28.559 [2024-11-19 10:58:18.192001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.192038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.192262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.192300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.192569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.192601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.192882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.192914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.193049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.193081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cba0 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 
00:30:28.559 [2024-11-19 10:58:18.193346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.193394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.193634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.193668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.193957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.193990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.194241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.194275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.194579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.194612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 
00:30:28.559 [2024-11-19 10:58:18.194866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.194898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.195184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.195225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.195492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.195523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.195803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.195836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.196044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.196074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 
00:30:28.559 [2024-11-19 10:58:18.196262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.559 [2024-11-19 10:58:18.196296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.559 qpair failed and we were unable to recover it. 00:30:28.559 [2024-11-19 10:58:18.196415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.196447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.196660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.196692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.196895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.196934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.197127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.197160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 
00:30:28.560 [2024-11-19 10:58:18.197429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.197461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.197645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.197678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.197945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.197977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.198219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.560 [2024-11-19 10:58:18.198252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.198516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.198548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 
00:30:28.560 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:28.560 [2024-11-19 10:58:18.198832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.198864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.560 [2024-11-19 10:58:18.199067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.199098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:28.560 [2024-11-19 10:58:18.199277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.199310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.199572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.199603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 
00:30:28.560 [2024-11-19 10:58:18.199853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.199886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.200089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.200121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.200380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.200414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.200535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.200567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.200829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.200862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 
00:30:28.560 [2024-11-19 10:58:18.201075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.201106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.201363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.201396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.201646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.201678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.201919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.201950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.202136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.202168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 
00:30:28.560 [2024-11-19 10:58:18.202353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.202387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.202624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.202656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.202914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.202945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.203183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.203222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.203450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.203487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 
00:30:28.560 [2024-11-19 10:58:18.203747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.203780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.560 [2024-11-19 10:58:18.204021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.560 [2024-11-19 10:58:18.204053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.560 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.204317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.204351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.204637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.204668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.204939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.204972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 
00:30:28.561 [2024-11-19 10:58:18.205156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.205188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.205398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.205431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.205667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.205698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.205940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.205972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.206097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.206128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 
00:30:28.561 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.561 [2024-11-19 10:58:18.206383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.206417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:28.561 [2024-11-19 10:58:18.206613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.206645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.206908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.561 [2024-11-19 10:58:18.206942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.207155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.207188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 
00:30:28.561 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:28.561 [2024-11-19 10:58:18.207315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.207347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.207541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.207573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.207813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.207845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.208060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.208093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.208364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.208398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 
00:30:28.561 [2024-11-19 10:58:18.208680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.208712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.208980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.209012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.209255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.209289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.209465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.209496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.209784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.209817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 
00:30:28.561 [2024-11-19 10:58:18.210067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.210099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.210392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.210426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.210660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.210692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.210954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.210986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.211235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.211268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 
00:30:28.561 [2024-11-19 10:58:18.211529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.211561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.211846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.211878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.212134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.212166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.212460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.212495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.212670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.212701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 
00:30:28.561 [2024-11-19 10:58:18.212963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.212995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.561 [2024-11-19 10:58:18.213169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.561 [2024-11-19 10:58:18.213211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.561 qpair failed and we were unable to recover it. 00:30:28.562 [2024-11-19 10:58:18.213425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.562 [2024-11-19 10:58:18.213456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.562 qpair failed and we were unable to recover it. 00:30:28.562 [2024-11-19 10:58:18.213693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.562 [2024-11-19 10:58:18.213732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.562 qpair failed and we were unable to recover it. 00:30:28.562 [2024-11-19 10:58:18.213995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.562 [2024-11-19 10:58:18.214026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.562 qpair failed and we were unable to recover it. 
00:30:28.562 [2024-11-19 10:58:18.214260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.562 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.562 [2024-11-19 10:58:18.214295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.562 qpair failed and we were unable to recover it. 00:30:28.562 [2024-11-19 10:58:18.214499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.562 [2024-11-19 10:58:18.214530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.562 qpair failed and we were unable to recover it. 00:30:28.562 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:28.562 [2024-11-19 10:58:18.214699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.562 [2024-11-19 10:58:18.214731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.562 qpair failed and we were unable to recover it. 00:30:28.562 [2024-11-19 10:58:18.214967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.562 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.562 [2024-11-19 10:58:18.214999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.562 qpair failed and we were unable to recover it. 
00:30:28.562 [2024-11-19 10:58:18.215261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.562 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:28.562 [2024-11-19 10:58:18.215294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.562 qpair failed and we were unable to recover it. 00:30:28.562 [2024-11-19 10:58:18.215578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.562 [2024-11-19 10:58:18.215610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.562 qpair failed and we were unable to recover it. 00:30:28.562 [2024-11-19 10:58:18.215883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.562 [2024-11-19 10:58:18.215915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.562 qpair failed and we were unable to recover it. 00:30:28.562 [2024-11-19 10:58:18.216183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.562 [2024-11-19 10:58:18.216231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.562 qpair failed and we were unable to recover it. 00:30:28.562 [2024-11-19 10:58:18.216510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.562 [2024-11-19 10:58:18.216543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.562 qpair failed and we were unable to recover it. 
00:30:28.562 [2024-11-19 10:58:18.216806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.562 [2024-11-19 10:58:18.216836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.562 qpair failed and we were unable to recover it. 00:30:28.562 [2024-11-19 10:58:18.217104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.562 [2024-11-19 10:58:18.217136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.562 qpair failed and we were unable to recover it. 00:30:28.562 [2024-11-19 10:58:18.217346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.562 [2024-11-19 10:58:18.217379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.562 qpair failed and we were unable to recover it. 00:30:28.562 [2024-11-19 10:58:18.217585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.562 [2024-11-19 10:58:18.217617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6b40000b90 with addr=10.0.0.2, port=4420 00:30:28.562 qpair failed and we were unable to recover it. 
00:30:28.562 [2024-11-19 10:58:18.217736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:28.562 [2024-11-19 10:58:18.220220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.562 [2024-11-19 10:58:18.220348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.562 [2024-11-19 10:58:18.220399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.562 [2024-11-19 10:58:18.220422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.562 [2024-11-19 10:58:18.220442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:28.562 [2024-11-19 10:58:18.220495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.562 qpair failed and we were unable to recover it. 
00:30:28.562 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.562 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:28.562 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.562 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:28.562 [2024-11-19 10:58:18.230128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.562 [2024-11-19 10:58:18.230250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.562 [2024-11-19 10:58:18.230284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.562 [2024-11-19 10:58:18.230303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.562 [2024-11-19 10:58:18.230320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:28.562 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.562 [2024-11-19 10:58:18.230359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.562 qpair failed and we were unable to recover it. 
00:30:28.562 10:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 4091098 00:30:28.562 [2024-11-19 10:58:18.240117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.562 [2024-11-19 10:58:18.240189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.562 [2024-11-19 10:58:18.240223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.562 [2024-11-19 10:58:18.240235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.562 [2024-11-19 10:58:18.240246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:28.562 [2024-11-19 10:58:18.240272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.562 qpair failed and we were unable to recover it. 
00:30:28.562 [2024-11-19 10:58:18.250110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.562 [2024-11-19 10:58:18.250189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.562 [2024-11-19 10:58:18.250208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.562 [2024-11-19 10:58:18.250217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.562 [2024-11-19 10:58:18.250225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:28.562 [2024-11-19 10:58:18.250243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.562 qpair failed and we were unable to recover it. 
00:30:28.562 [2024-11-19 10:58:18.260085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.562 [2024-11-19 10:58:18.260144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.562 [2024-11-19 10:58:18.260157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.563 [2024-11-19 10:58:18.260165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.563 [2024-11-19 10:58:18.260170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:28.563 [2024-11-19 10:58:18.260184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.563 qpair failed and we were unable to recover it. 
00:30:28.563 [2024-11-19 10:58:18.270143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.563 [2024-11-19 10:58:18.270195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.563 [2024-11-19 10:58:18.270212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.563 [2024-11-19 10:58:18.270219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.563 [2024-11-19 10:58:18.270225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:28.563 [2024-11-19 10:58:18.270240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.563 qpair failed and we were unable to recover it. 
00:30:28.563 [2024-11-19 10:58:18.280122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.563 [2024-11-19 10:58:18.280180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.563 [2024-11-19 10:58:18.280193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.563 [2024-11-19 10:58:18.280204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.563 [2024-11-19 10:58:18.280214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:28.563 [2024-11-19 10:58:18.280229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.563 qpair failed and we were unable to recover it. 
00:30:28.563 [2024-11-19 10:58:18.290152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.563 [2024-11-19 10:58:18.290216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.563 [2024-11-19 10:58:18.290231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.563 [2024-11-19 10:58:18.290237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.563 [2024-11-19 10:58:18.290243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:28.563 [2024-11-19 10:58:18.290258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.563 qpair failed and we were unable to recover it. 
00:30:28.563 [2024-11-19 10:58:18.300219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.563 [2024-11-19 10:58:18.300286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.563 [2024-11-19 10:58:18.300300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.563 [2024-11-19 10:58:18.300306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.563 [2024-11-19 10:58:18.300312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:28.563 [2024-11-19 10:58:18.300326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.563 qpair failed and we were unable to recover it. 
00:30:28.823 [2024-11-19 10:58:18.310239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.823 [2024-11-19 10:58:18.310288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.823 [2024-11-19 10:58:18.310302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.823 [2024-11-19 10:58:18.310308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.823 [2024-11-19 10:58:18.310314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:28.823 [2024-11-19 10:58:18.310329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.823 qpair failed and we were unable to recover it. 
00:30:28.823 [2024-11-19 10:58:18.320268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.823 [2024-11-19 10:58:18.320316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.823 [2024-11-19 10:58:18.320329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.823 [2024-11-19 10:58:18.320336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.823 [2024-11-19 10:58:18.320342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.823 [2024-11-19 10:58:18.320357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.823 qpair failed and we were unable to recover it.
00:30:28.823 [2024-11-19 10:58:18.330274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.823 [2024-11-19 10:58:18.330358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.823 [2024-11-19 10:58:18.330371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.823 [2024-11-19 10:58:18.330378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.823 [2024-11-19 10:58:18.330384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.823 [2024-11-19 10:58:18.330399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.823 qpair failed and we were unable to recover it.
00:30:28.823 [2024-11-19 10:58:18.340313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.823 [2024-11-19 10:58:18.340369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.823 [2024-11-19 10:58:18.340383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.823 [2024-11-19 10:58:18.340389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.823 [2024-11-19 10:58:18.340395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.823 [2024-11-19 10:58:18.340410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.823 qpair failed and we were unable to recover it.
00:30:28.823 [2024-11-19 10:58:18.350324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.823 [2024-11-19 10:58:18.350377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.823 [2024-11-19 10:58:18.350390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.823 [2024-11-19 10:58:18.350397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.823 [2024-11-19 10:58:18.350403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.823 [2024-11-19 10:58:18.350417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.823 qpair failed and we were unable to recover it.
00:30:28.823 [2024-11-19 10:58:18.360325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.823 [2024-11-19 10:58:18.360377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.823 [2024-11-19 10:58:18.360390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.823 [2024-11-19 10:58:18.360397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.823 [2024-11-19 10:58:18.360403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.823 [2024-11-19 10:58:18.360418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.823 qpair failed and we were unable to recover it.
00:30:28.823 [2024-11-19 10:58:18.370441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.823 [2024-11-19 10:58:18.370540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.823 [2024-11-19 10:58:18.370558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.823 [2024-11-19 10:58:18.370565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.823 [2024-11-19 10:58:18.370570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.823 [2024-11-19 10:58:18.370584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.823 qpair failed and we were unable to recover it.
00:30:28.823 [2024-11-19 10:58:18.380420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.823 [2024-11-19 10:58:18.380472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.823 [2024-11-19 10:58:18.380485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.823 [2024-11-19 10:58:18.380492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.823 [2024-11-19 10:58:18.380498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.823 [2024-11-19 10:58:18.380512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.823 qpair failed and we were unable to recover it.
00:30:28.823 [2024-11-19 10:58:18.390430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.823 [2024-11-19 10:58:18.390486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.823 [2024-11-19 10:58:18.390500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.823 [2024-11-19 10:58:18.390507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.823 [2024-11-19 10:58:18.390512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.823 [2024-11-19 10:58:18.390527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.823 qpair failed and we were unable to recover it.
00:30:28.823 [2024-11-19 10:58:18.400447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.823 [2024-11-19 10:58:18.400501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.823 [2024-11-19 10:58:18.400515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.824 [2024-11-19 10:58:18.400521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.824 [2024-11-19 10:58:18.400528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.824 [2024-11-19 10:58:18.400542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.824 qpair failed and we were unable to recover it.
00:30:28.824 [2024-11-19 10:58:18.410491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.824 [2024-11-19 10:58:18.410545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.824 [2024-11-19 10:58:18.410559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.824 [2024-11-19 10:58:18.410569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.824 [2024-11-19 10:58:18.410574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.824 [2024-11-19 10:58:18.410589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.824 qpair failed and we were unable to recover it.
00:30:28.824 [2024-11-19 10:58:18.420520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.824 [2024-11-19 10:58:18.420573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.824 [2024-11-19 10:58:18.420586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.824 [2024-11-19 10:58:18.420592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.824 [2024-11-19 10:58:18.420598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.824 [2024-11-19 10:58:18.420613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.824 qpair failed and we were unable to recover it.
00:30:28.824 [2024-11-19 10:58:18.430559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.824 [2024-11-19 10:58:18.430634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.824 [2024-11-19 10:58:18.430648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.824 [2024-11-19 10:58:18.430655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.824 [2024-11-19 10:58:18.430661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.824 [2024-11-19 10:58:18.430676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.824 qpair failed and we were unable to recover it.
00:30:28.824 [2024-11-19 10:58:18.440585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.824 [2024-11-19 10:58:18.440633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.824 [2024-11-19 10:58:18.440647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.824 [2024-11-19 10:58:18.440653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.824 [2024-11-19 10:58:18.440659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.824 [2024-11-19 10:58:18.440674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.824 qpair failed and we were unable to recover it.
00:30:28.824 [2024-11-19 10:58:18.450615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.824 [2024-11-19 10:58:18.450670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.824 [2024-11-19 10:58:18.450683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.824 [2024-11-19 10:58:18.450690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.824 [2024-11-19 10:58:18.450697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.824 [2024-11-19 10:58:18.450715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.824 qpair failed and we were unable to recover it.
00:30:28.824 [2024-11-19 10:58:18.460653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.824 [2024-11-19 10:58:18.460712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.824 [2024-11-19 10:58:18.460725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.824 [2024-11-19 10:58:18.460733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.824 [2024-11-19 10:58:18.460738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.824 [2024-11-19 10:58:18.460753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.824 qpair failed and we were unable to recover it.
00:30:28.824 [2024-11-19 10:58:18.470678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.824 [2024-11-19 10:58:18.470728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.824 [2024-11-19 10:58:18.470741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.824 [2024-11-19 10:58:18.470748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.824 [2024-11-19 10:58:18.470754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.824 [2024-11-19 10:58:18.470769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.824 qpair failed and we were unable to recover it.
00:30:28.824 [2024-11-19 10:58:18.480760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.824 [2024-11-19 10:58:18.480830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.824 [2024-11-19 10:58:18.480843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.824 [2024-11-19 10:58:18.480850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.824 [2024-11-19 10:58:18.480856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.824 [2024-11-19 10:58:18.480872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.824 qpair failed and we were unable to recover it.
00:30:28.824 [2024-11-19 10:58:18.490736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.824 [2024-11-19 10:58:18.490791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.824 [2024-11-19 10:58:18.490805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.824 [2024-11-19 10:58:18.490811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.824 [2024-11-19 10:58:18.490818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.824 [2024-11-19 10:58:18.490832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.824 qpair failed and we were unable to recover it.
00:30:28.824 [2024-11-19 10:58:18.500780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.824 [2024-11-19 10:58:18.500884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.824 [2024-11-19 10:58:18.500898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.824 [2024-11-19 10:58:18.500905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.824 [2024-11-19 10:58:18.500910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.824 [2024-11-19 10:58:18.500925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.824 qpair failed and we were unable to recover it.
00:30:28.824 [2024-11-19 10:58:18.510775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.824 [2024-11-19 10:58:18.510827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.824 [2024-11-19 10:58:18.510840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.824 [2024-11-19 10:58:18.510847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.824 [2024-11-19 10:58:18.510853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.824 [2024-11-19 10:58:18.510867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.824 qpair failed and we were unable to recover it.
00:30:28.824 [2024-11-19 10:58:18.520802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.824 [2024-11-19 10:58:18.520866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.824 [2024-11-19 10:58:18.520879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.824 [2024-11-19 10:58:18.520886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.824 [2024-11-19 10:58:18.520892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.824 [2024-11-19 10:58:18.520906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.824 qpair failed and we were unable to recover it.
00:30:28.824 [2024-11-19 10:58:18.530837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.824 [2024-11-19 10:58:18.530892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.825 [2024-11-19 10:58:18.530906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.825 [2024-11-19 10:58:18.530912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.825 [2024-11-19 10:58:18.530918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.825 [2024-11-19 10:58:18.530932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.825 qpair failed and we were unable to recover it.
00:30:28.825 [2024-11-19 10:58:18.540884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.825 [2024-11-19 10:58:18.540962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.825 [2024-11-19 10:58:18.540975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.825 [2024-11-19 10:58:18.540985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.825 [2024-11-19 10:58:18.540991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.825 [2024-11-19 10:58:18.541006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.825 qpair failed and we were unable to recover it.
00:30:28.825 [2024-11-19 10:58:18.550889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.825 [2024-11-19 10:58:18.550972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.825 [2024-11-19 10:58:18.550985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.825 [2024-11-19 10:58:18.550992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.825 [2024-11-19 10:58:18.550997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.825 [2024-11-19 10:58:18.551012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.825 qpair failed and we were unable to recover it.
00:30:28.825 [2024-11-19 10:58:18.560947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.825 [2024-11-19 10:58:18.561001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.825 [2024-11-19 10:58:18.561014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.825 [2024-11-19 10:58:18.561021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.825 [2024-11-19 10:58:18.561027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.825 [2024-11-19 10:58:18.561041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.825 qpair failed and we were unable to recover it.
00:30:28.825 [2024-11-19 10:58:18.570992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.825 [2024-11-19 10:58:18.571099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.825 [2024-11-19 10:58:18.571112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.825 [2024-11-19 10:58:18.571119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.825 [2024-11-19 10:58:18.571125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.825 [2024-11-19 10:58:18.571139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.825 qpair failed and we were unable to recover it.
00:30:28.825 [2024-11-19 10:58:18.580991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.825 [2024-11-19 10:58:18.581052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.825 [2024-11-19 10:58:18.581065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.825 [2024-11-19 10:58:18.581072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.825 [2024-11-19 10:58:18.581078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.825 [2024-11-19 10:58:18.581095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.825 qpair failed and we were unable to recover it.
00:30:28.825 [2024-11-19 10:58:18.591008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.825 [2024-11-19 10:58:18.591061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.825 [2024-11-19 10:58:18.591075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.825 [2024-11-19 10:58:18.591082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.825 [2024-11-19 10:58:18.591088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.825 [2024-11-19 10:58:18.591102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.825 qpair failed and we were unable to recover it.
00:30:28.825 [2024-11-19 10:58:18.601042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.825 [2024-11-19 10:58:18.601098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.825 [2024-11-19 10:58:18.601112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.825 [2024-11-19 10:58:18.601119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.825 [2024-11-19 10:58:18.601125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:28.825 [2024-11-19 10:58:18.601139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.825 qpair failed and we were unable to recover it.
00:30:29.085 [2024-11-19 10:58:18.611112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.085 [2024-11-19 10:58:18.611169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.085 [2024-11-19 10:58:18.611183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.085 [2024-11-19 10:58:18.611189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.085 [2024-11-19 10:58:18.611196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:29.085 [2024-11-19 10:58:18.611216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.085 qpair failed and we were unable to recover it.
00:30:29.085 [2024-11-19 10:58:18.621163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.085 [2024-11-19 10:58:18.621268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.085 [2024-11-19 10:58:18.621282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.085 [2024-11-19 10:58:18.621289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.085 [2024-11-19 10:58:18.621294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:29.085 [2024-11-19 10:58:18.621309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.085 qpair failed and we were unable to recover it.
00:30:29.085 [2024-11-19 10:58:18.631140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.085 [2024-11-19 10:58:18.631193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.085 [2024-11-19 10:58:18.631211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.085 [2024-11-19 10:58:18.631218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.085 [2024-11-19 10:58:18.631224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:29.085 [2024-11-19 10:58:18.631240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.085 qpair failed and we were unable to recover it.
00:30:29.085 [2024-11-19 10:58:18.641149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.085 [2024-11-19 10:58:18.641208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.085 [2024-11-19 10:58:18.641223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.085 [2024-11-19 10:58:18.641230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.085 [2024-11-19 10:58:18.641235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:29.085 [2024-11-19 10:58:18.641251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.085 qpair failed and we were unable to recover it.
00:30:29.086 [2024-11-19 10:58:18.651212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.086 [2024-11-19 10:58:18.651269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.086 [2024-11-19 10:58:18.651282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.086 [2024-11-19 10:58:18.651289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.086 [2024-11-19 10:58:18.651295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:29.086 [2024-11-19 10:58:18.651309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.086 qpair failed and we were unable to recover it.
00:30:29.086 [2024-11-19 10:58:18.661220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.086 [2024-11-19 10:58:18.661290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.086 [2024-11-19 10:58:18.661304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.086 [2024-11-19 10:58:18.661310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.086 [2024-11-19 10:58:18.661316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:29.086 [2024-11-19 10:58:18.661331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.086 qpair failed and we were unable to recover it.
00:30:29.086 [2024-11-19 10:58:18.671171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.086 [2024-11-19 10:58:18.671231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.086 [2024-11-19 10:58:18.671247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.086 [2024-11-19 10:58:18.671254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.086 [2024-11-19 10:58:18.671260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.086 [2024-11-19 10:58:18.671275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.086 qpair failed and we were unable to recover it. 
00:30:29.086 [2024-11-19 10:58:18.681313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.086 [2024-11-19 10:58:18.681373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.086 [2024-11-19 10:58:18.681385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.086 [2024-11-19 10:58:18.681392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.086 [2024-11-19 10:58:18.681398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.086 [2024-11-19 10:58:18.681412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.086 qpair failed and we were unable to recover it. 
00:30:29.086 [2024-11-19 10:58:18.691383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.086 [2024-11-19 10:58:18.691457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.086 [2024-11-19 10:58:18.691471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.086 [2024-11-19 10:58:18.691478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.086 [2024-11-19 10:58:18.691483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.086 [2024-11-19 10:58:18.691498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.086 qpair failed and we were unable to recover it. 
00:30:29.086 [2024-11-19 10:58:18.701352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.086 [2024-11-19 10:58:18.701411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.086 [2024-11-19 10:58:18.701424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.086 [2024-11-19 10:58:18.701431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.086 [2024-11-19 10:58:18.701437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.086 [2024-11-19 10:58:18.701451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.086 qpair failed and we were unable to recover it. 
00:30:29.086 [2024-11-19 10:58:18.711346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.086 [2024-11-19 10:58:18.711400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.086 [2024-11-19 10:58:18.711413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.086 [2024-11-19 10:58:18.711419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.086 [2024-11-19 10:58:18.711428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.086 [2024-11-19 10:58:18.711442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.086 qpair failed and we were unable to recover it. 
00:30:29.086 [2024-11-19 10:58:18.721402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.086 [2024-11-19 10:58:18.721467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.086 [2024-11-19 10:58:18.721480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.086 [2024-11-19 10:58:18.721487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.086 [2024-11-19 10:58:18.721493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.086 [2024-11-19 10:58:18.721507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.086 qpair failed and we were unable to recover it. 
00:30:29.086 [2024-11-19 10:58:18.731465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.086 [2024-11-19 10:58:18.731518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.086 [2024-11-19 10:58:18.731532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.086 [2024-11-19 10:58:18.731538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.086 [2024-11-19 10:58:18.731544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.086 [2024-11-19 10:58:18.731559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.086 qpair failed and we were unable to recover it. 
00:30:29.086 [2024-11-19 10:58:18.741374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.086 [2024-11-19 10:58:18.741433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.086 [2024-11-19 10:58:18.741445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.086 [2024-11-19 10:58:18.741452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.086 [2024-11-19 10:58:18.741459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.087 [2024-11-19 10:58:18.741473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.087 qpair failed and we were unable to recover it. 
00:30:29.087 [2024-11-19 10:58:18.751470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.087 [2024-11-19 10:58:18.751522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.087 [2024-11-19 10:58:18.751534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.087 [2024-11-19 10:58:18.751541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.087 [2024-11-19 10:58:18.751547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.087 [2024-11-19 10:58:18.751561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.087 qpair failed and we were unable to recover it. 
00:30:29.087 [2024-11-19 10:58:18.761510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.087 [2024-11-19 10:58:18.761575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.087 [2024-11-19 10:58:18.761587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.087 [2024-11-19 10:58:18.761594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.087 [2024-11-19 10:58:18.761600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.087 [2024-11-19 10:58:18.761614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.087 qpair failed and we were unable to recover it. 
00:30:29.087 [2024-11-19 10:58:18.771524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.087 [2024-11-19 10:58:18.771580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.087 [2024-11-19 10:58:18.771592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.087 [2024-11-19 10:58:18.771599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.087 [2024-11-19 10:58:18.771605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.087 [2024-11-19 10:58:18.771619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.087 qpair failed and we were unable to recover it. 
00:30:29.087 [2024-11-19 10:58:18.781547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.087 [2024-11-19 10:58:18.781605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.087 [2024-11-19 10:58:18.781618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.087 [2024-11-19 10:58:18.781624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.087 [2024-11-19 10:58:18.781630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.087 [2024-11-19 10:58:18.781644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.087 qpair failed and we were unable to recover it. 
00:30:29.087 [2024-11-19 10:58:18.791492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.087 [2024-11-19 10:58:18.791540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.087 [2024-11-19 10:58:18.791553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.087 [2024-11-19 10:58:18.791560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.087 [2024-11-19 10:58:18.791566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.087 [2024-11-19 10:58:18.791580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.087 qpair failed and we were unable to recover it. 
00:30:29.087 [2024-11-19 10:58:18.801595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.087 [2024-11-19 10:58:18.801674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.087 [2024-11-19 10:58:18.801690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.087 [2024-11-19 10:58:18.801697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.087 [2024-11-19 10:58:18.801702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.087 [2024-11-19 10:58:18.801717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.087 qpair failed and we were unable to recover it. 
00:30:29.087 [2024-11-19 10:58:18.811630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.087 [2024-11-19 10:58:18.811683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.087 [2024-11-19 10:58:18.811696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.087 [2024-11-19 10:58:18.811703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.087 [2024-11-19 10:58:18.811709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.087 [2024-11-19 10:58:18.811723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.087 qpair failed and we were unable to recover it. 
00:30:29.087 [2024-11-19 10:58:18.821670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.087 [2024-11-19 10:58:18.821732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.087 [2024-11-19 10:58:18.821745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.087 [2024-11-19 10:58:18.821752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.087 [2024-11-19 10:58:18.821760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.087 [2024-11-19 10:58:18.821774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.087 qpair failed and we were unable to recover it. 
00:30:29.087 [2024-11-19 10:58:18.831679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.087 [2024-11-19 10:58:18.831749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.087 [2024-11-19 10:58:18.831762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.087 [2024-11-19 10:58:18.831769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.087 [2024-11-19 10:58:18.831775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.087 [2024-11-19 10:58:18.831790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.087 qpair failed and we were unable to recover it. 
00:30:29.087 [2024-11-19 10:58:18.841666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.087 [2024-11-19 10:58:18.841727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.087 [2024-11-19 10:58:18.841742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.087 [2024-11-19 10:58:18.841749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.087 [2024-11-19 10:58:18.841759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.087 [2024-11-19 10:58:18.841775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.087 qpair failed and we were unable to recover it. 
00:30:29.087 [2024-11-19 10:58:18.851758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.087 [2024-11-19 10:58:18.851819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.087 [2024-11-19 10:58:18.851834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.087 [2024-11-19 10:58:18.851841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.087 [2024-11-19 10:58:18.851847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.087 [2024-11-19 10:58:18.851861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.087 qpair failed and we were unable to recover it. 
00:30:29.087 [2024-11-19 10:58:18.861696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.087 [2024-11-19 10:58:18.861758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.087 [2024-11-19 10:58:18.861772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.087 [2024-11-19 10:58:18.861779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.087 [2024-11-19 10:58:18.861785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.087 [2024-11-19 10:58:18.861799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.088 qpair failed and we were unable to recover it. 
00:30:29.088 [2024-11-19 10:58:18.871840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.088 [2024-11-19 10:58:18.871900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.088 [2024-11-19 10:58:18.871914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.088 [2024-11-19 10:58:18.871921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.088 [2024-11-19 10:58:18.871927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.088 [2024-11-19 10:58:18.871942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.088 qpair failed and we were unable to recover it. 
00:30:29.348 [2024-11-19 10:58:18.881834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.348 [2024-11-19 10:58:18.881912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.348 [2024-11-19 10:58:18.881927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.348 [2024-11-19 10:58:18.881934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.348 [2024-11-19 10:58:18.881940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.348 [2024-11-19 10:58:18.881956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.348 qpair failed and we were unable to recover it. 
00:30:29.348 [2024-11-19 10:58:18.891798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.348 [2024-11-19 10:58:18.891856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.348 [2024-11-19 10:58:18.891870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.348 [2024-11-19 10:58:18.891877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.348 [2024-11-19 10:58:18.891882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.348 [2024-11-19 10:58:18.891897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.348 qpair failed and we were unable to recover it. 
00:30:29.348 [2024-11-19 10:58:18.901873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.348 [2024-11-19 10:58:18.901930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.348 [2024-11-19 10:58:18.901943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.348 [2024-11-19 10:58:18.901950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.348 [2024-11-19 10:58:18.901956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.348 [2024-11-19 10:58:18.901971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.348 qpair failed and we were unable to recover it. 
00:30:29.348 [2024-11-19 10:58:18.911840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.348 [2024-11-19 10:58:18.911898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.348 [2024-11-19 10:58:18.911911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.348 [2024-11-19 10:58:18.911918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.348 [2024-11-19 10:58:18.911924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.348 [2024-11-19 10:58:18.911938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.348 qpair failed and we were unable to recover it. 
00:30:29.348 [2024-11-19 10:58:18.921873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.348 [2024-11-19 10:58:18.921928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.348 [2024-11-19 10:58:18.921942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.348 [2024-11-19 10:58:18.921948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.348 [2024-11-19 10:58:18.921954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.348 [2024-11-19 10:58:18.921969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.348 qpair failed and we were unable to recover it. 
00:30:29.348 [2024-11-19 10:58:18.931905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.348 [2024-11-19 10:58:18.931993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.348 [2024-11-19 10:58:18.932010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.348 [2024-11-19 10:58:18.932017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.348 [2024-11-19 10:58:18.932022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.348 [2024-11-19 10:58:18.932037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.348 qpair failed and we were unable to recover it. 
00:30:29.348 [2024-11-19 10:58:18.942030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.348 [2024-11-19 10:58:18.942089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.348 [2024-11-19 10:58:18.942102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.348 [2024-11-19 10:58:18.942109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.348 [2024-11-19 10:58:18.942115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.348 [2024-11-19 10:58:18.942129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.348 qpair failed and we were unable to recover it. 
00:30:29.348 [2024-11-19 10:58:18.951943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.348 [2024-11-19 10:58:18.951990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.348 [2024-11-19 10:58:18.952004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.348 [2024-11-19 10:58:18.952011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.348 [2024-11-19 10:58:18.952017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.348 [2024-11-19 10:58:18.952032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.348 qpair failed and we were unable to recover it. 
00:30:29.348 [2024-11-19 10:58:18.962034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.348 [2024-11-19 10:58:18.962095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.348 [2024-11-19 10:58:18.962109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.348 [2024-11-19 10:58:18.962116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.348 [2024-11-19 10:58:18.962121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.348 [2024-11-19 10:58:18.962136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.348 qpair failed and we were unable to recover it. 
00:30:29.348 [2024-11-19 10:58:18.972085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.348 [2024-11-19 10:58:18.972190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.348 [2024-11-19 10:58:18.972209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.348 [2024-11-19 10:58:18.972219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.348 [2024-11-19 10:58:18.972225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.348 [2024-11-19 10:58:18.972241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.348 qpair failed and we were unable to recover it. 
00:30:29.348 [2024-11-19 10:58:18.982056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.348 [2024-11-19 10:58:18.982110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.348 [2024-11-19 10:58:18.982124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.348 [2024-11-19 10:58:18.982131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.348 [2024-11-19 10:58:18.982137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.349 [2024-11-19 10:58:18.982151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.349 qpair failed and we were unable to recover it. 
00:30:29.349 [2024-11-19 10:58:18.992078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.349 [2024-11-19 10:58:18.992132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.349 [2024-11-19 10:58:18.992146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.349 [2024-11-19 10:58:18.992153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.349 [2024-11-19 10:58:18.992159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.349 [2024-11-19 10:58:18.992174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.349 qpair failed and we were unable to recover it. 
00:30:29.349 [2024-11-19 10:58:19.002130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.349 [2024-11-19 10:58:19.002212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.349 [2024-11-19 10:58:19.002227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.349 [2024-11-19 10:58:19.002233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.349 [2024-11-19 10:58:19.002239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.349 [2024-11-19 10:58:19.002254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.349 qpair failed and we were unable to recover it. 
00:30:29.349 [2024-11-19 10:58:19.012238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.349 [2024-11-19 10:58:19.012304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.349 [2024-11-19 10:58:19.012318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.349 [2024-11-19 10:58:19.012324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.349 [2024-11-19 10:58:19.012330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.349 [2024-11-19 10:58:19.012348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.349 qpair failed and we were unable to recover it. 
00:30:29.349 [2024-11-19 10:58:19.022348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.349 [2024-11-19 10:58:19.022419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.349 [2024-11-19 10:58:19.022433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.349 [2024-11-19 10:58:19.022439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.349 [2024-11-19 10:58:19.022445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.349 [2024-11-19 10:58:19.022461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.349 qpair failed and we were unable to recover it. 
00:30:29.349 [2024-11-19 10:58:19.032276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.349 [2024-11-19 10:58:19.032328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.349 [2024-11-19 10:58:19.032342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.349 [2024-11-19 10:58:19.032349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.349 [2024-11-19 10:58:19.032355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.349 [2024-11-19 10:58:19.032370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.349 qpair failed and we were unable to recover it. 
00:30:29.349 [2024-11-19 10:58:19.042255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.349 [2024-11-19 10:58:19.042310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.349 [2024-11-19 10:58:19.042324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.349 [2024-11-19 10:58:19.042331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.349 [2024-11-19 10:58:19.042337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.349 [2024-11-19 10:58:19.042352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.349 qpair failed and we were unable to recover it. 
00:30:29.349 [2024-11-19 10:58:19.052275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.349 [2024-11-19 10:58:19.052331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.349 [2024-11-19 10:58:19.052345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.349 [2024-11-19 10:58:19.052351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.349 [2024-11-19 10:58:19.052357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.349 [2024-11-19 10:58:19.052372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.349 qpair failed and we were unable to recover it. 
00:30:29.349 [2024-11-19 10:58:19.062289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.349 [2024-11-19 10:58:19.062349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.349 [2024-11-19 10:58:19.062364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.349 [2024-11-19 10:58:19.062370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.349 [2024-11-19 10:58:19.062376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.349 [2024-11-19 10:58:19.062391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.349 qpair failed and we were unable to recover it. 
00:30:29.349 [2024-11-19 10:58:19.072315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.349 [2024-11-19 10:58:19.072375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.349 [2024-11-19 10:58:19.072390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.349 [2024-11-19 10:58:19.072397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.349 [2024-11-19 10:58:19.072403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.349 [2024-11-19 10:58:19.072418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.349 qpair failed and we were unable to recover it. 
00:30:29.349 [2024-11-19 10:58:19.082411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.349 [2024-11-19 10:58:19.082468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.349 [2024-11-19 10:58:19.082482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.349 [2024-11-19 10:58:19.082489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.349 [2024-11-19 10:58:19.082494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.349 [2024-11-19 10:58:19.082510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.349 qpair failed and we were unable to recover it. 
00:30:29.349 [2024-11-19 10:58:19.092430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.349 [2024-11-19 10:58:19.092484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.349 [2024-11-19 10:58:19.092498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.349 [2024-11-19 10:58:19.092506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.349 [2024-11-19 10:58:19.092512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.349 [2024-11-19 10:58:19.092526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.349 qpair failed and we were unable to recover it. 
00:30:29.349 [2024-11-19 10:58:19.102497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.349 [2024-11-19 10:58:19.102582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.349 [2024-11-19 10:58:19.102596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.349 [2024-11-19 10:58:19.102608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.349 [2024-11-19 10:58:19.102614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.349 [2024-11-19 10:58:19.102629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.349 qpair failed and we were unable to recover it. 
00:30:29.349 [2024-11-19 10:58:19.112479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.349 [2024-11-19 10:58:19.112533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.349 [2024-11-19 10:58:19.112546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.349 [2024-11-19 10:58:19.112553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.349 [2024-11-19 10:58:19.112559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.350 [2024-11-19 10:58:19.112574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.350 qpair failed and we were unable to recover it. 
00:30:29.350 [2024-11-19 10:58:19.122513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.350 [2024-11-19 10:58:19.122601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.350 [2024-11-19 10:58:19.122615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.350 [2024-11-19 10:58:19.122621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.350 [2024-11-19 10:58:19.122627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.350 [2024-11-19 10:58:19.122643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.350 qpair failed and we were unable to recover it. 
00:30:29.350 [2024-11-19 10:58:19.132541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.350 [2024-11-19 10:58:19.132594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.350 [2024-11-19 10:58:19.132608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.350 [2024-11-19 10:58:19.132615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.350 [2024-11-19 10:58:19.132621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.350 [2024-11-19 10:58:19.132635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.350 qpair failed and we were unable to recover it. 
00:30:29.610 [2024-11-19 10:58:19.142593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.610 [2024-11-19 10:58:19.142645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.610 [2024-11-19 10:58:19.142659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.610 [2024-11-19 10:58:19.142666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.610 [2024-11-19 10:58:19.142672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.610 [2024-11-19 10:58:19.142690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.610 qpair failed and we were unable to recover it. 
00:30:29.610 [2024-11-19 10:58:19.152547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.610 [2024-11-19 10:58:19.152599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.610 [2024-11-19 10:58:19.152613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.610 [2024-11-19 10:58:19.152619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.610 [2024-11-19 10:58:19.152625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.610 [2024-11-19 10:58:19.152640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.610 qpair failed and we were unable to recover it. 
00:30:29.610 [2024-11-19 10:58:19.162543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.610 [2024-11-19 10:58:19.162604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.610 [2024-11-19 10:58:19.162618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.610 [2024-11-19 10:58:19.162625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.610 [2024-11-19 10:58:19.162631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.610 [2024-11-19 10:58:19.162646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.610 qpair failed and we were unable to recover it. 
00:30:29.610 [2024-11-19 10:58:19.172590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.610 [2024-11-19 10:58:19.172643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.610 [2024-11-19 10:58:19.172656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.610 [2024-11-19 10:58:19.172663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.610 [2024-11-19 10:58:19.172669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.610 [2024-11-19 10:58:19.172683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.610 qpair failed and we were unable to recover it. 
00:30:29.610 [2024-11-19 10:58:19.182607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.610 [2024-11-19 10:58:19.182662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.610 [2024-11-19 10:58:19.182676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.610 [2024-11-19 10:58:19.182683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.610 [2024-11-19 10:58:19.182689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.610 [2024-11-19 10:58:19.182703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.610 qpair failed and we were unable to recover it. 
00:30:29.610 [2024-11-19 10:58:19.192642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.610 [2024-11-19 10:58:19.192696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.610 [2024-11-19 10:58:19.192711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.610 [2024-11-19 10:58:19.192718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.610 [2024-11-19 10:58:19.192724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.610 [2024-11-19 10:58:19.192739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.610 qpair failed and we were unable to recover it. 
00:30:29.610 [2024-11-19 10:58:19.202676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.610 [2024-11-19 10:58:19.202744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.610 [2024-11-19 10:58:19.202759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.610 [2024-11-19 10:58:19.202766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.610 [2024-11-19 10:58:19.202771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.610 [2024-11-19 10:58:19.202787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.610 qpair failed and we were unable to recover it. 
00:30:29.610 [2024-11-19 10:58:19.212770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.610 [2024-11-19 10:58:19.212831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.610 [2024-11-19 10:58:19.212865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.610 [2024-11-19 10:58:19.212873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.610 [2024-11-19 10:58:19.212879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.610 [2024-11-19 10:58:19.212902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.610 qpair failed and we were unable to recover it. 
00:30:29.610 [2024-11-19 10:58:19.222811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.610 [2024-11-19 10:58:19.222866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.610 [2024-11-19 10:58:19.222882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.610 [2024-11-19 10:58:19.222889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.610 [2024-11-19 10:58:19.222895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.610 [2024-11-19 10:58:19.222911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.610 qpair failed and we were unable to recover it. 
00:30:29.610 [2024-11-19 10:58:19.232817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.610 [2024-11-19 10:58:19.232873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.610 [2024-11-19 10:58:19.232890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.611 [2024-11-19 10:58:19.232897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.611 [2024-11-19 10:58:19.232902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.611 [2024-11-19 10:58:19.232917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.611 qpair failed and we were unable to recover it. 
00:30:29.611 [2024-11-19 10:58:19.242857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.611 [2024-11-19 10:58:19.242938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.611 [2024-11-19 10:58:19.242952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.611 [2024-11-19 10:58:19.242958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.611 [2024-11-19 10:58:19.242964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.611 [2024-11-19 10:58:19.242979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.611 qpair failed and we were unable to recover it. 
00:30:29.611 [2024-11-19 10:58:19.252895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.611 [2024-11-19 10:58:19.252965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.611 [2024-11-19 10:58:19.252980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.611 [2024-11-19 10:58:19.252987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.611 [2024-11-19 10:58:19.252993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.611 [2024-11-19 10:58:19.253008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.611 qpair failed and we were unable to recover it. 
00:30:29.611 [2024-11-19 10:58:19.262849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.611 [2024-11-19 10:58:19.262931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.611 [2024-11-19 10:58:19.262945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.611 [2024-11-19 10:58:19.262952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.611 [2024-11-19 10:58:19.262958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.611 [2024-11-19 10:58:19.262972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.611 qpair failed and we were unable to recover it. 
00:30:29.611 [2024-11-19 10:58:19.272960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.611 [2024-11-19 10:58:19.273015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.611 [2024-11-19 10:58:19.273029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.611 [2024-11-19 10:58:19.273035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.611 [2024-11-19 10:58:19.273044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.611 [2024-11-19 10:58:19.273059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.611 qpair failed and we were unable to recover it. 
00:30:29.611 [2024-11-19 10:58:19.282963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.611 [2024-11-19 10:58:19.283016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.611 [2024-11-19 10:58:19.283031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.611 [2024-11-19 10:58:19.283038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.611 [2024-11-19 10:58:19.283043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.611 [2024-11-19 10:58:19.283058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.611 qpair failed and we were unable to recover it. 
00:30:29.611 [2024-11-19 10:58:19.293010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.611 [2024-11-19 10:58:19.293064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.611 [2024-11-19 10:58:19.293078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.611 [2024-11-19 10:58:19.293085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.611 [2024-11-19 10:58:19.293091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.611 [2024-11-19 10:58:19.293105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.611 qpair failed and we were unable to recover it. 
00:30:29.611 [2024-11-19 10:58:19.303035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.611 [2024-11-19 10:58:19.303093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.611 [2024-11-19 10:58:19.303107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.611 [2024-11-19 10:58:19.303114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.611 [2024-11-19 10:58:19.303119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.611 [2024-11-19 10:58:19.303134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.611 qpair failed and we were unable to recover it. 
00:30:29.611 [2024-11-19 10:58:19.312986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.611 [2024-11-19 10:58:19.313035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.611 [2024-11-19 10:58:19.313049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.611 [2024-11-19 10:58:19.313056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.611 [2024-11-19 10:58:19.313062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.611 [2024-11-19 10:58:19.313077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.611 qpair failed and we were unable to recover it. 
00:30:29.611 [2024-11-19 10:58:19.323081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.611 [2024-11-19 10:58:19.323148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.611 [2024-11-19 10:58:19.323162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.611 [2024-11-19 10:58:19.323169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.611 [2024-11-19 10:58:19.323175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.611 [2024-11-19 10:58:19.323189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.611 qpair failed and we were unable to recover it. 
00:30:29.611 [2024-11-19 10:58:19.333132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.611 [2024-11-19 10:58:19.333199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.611 [2024-11-19 10:58:19.333216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.611 [2024-11-19 10:58:19.333223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.611 [2024-11-19 10:58:19.333229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.611 [2024-11-19 10:58:19.333244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.611 qpair failed and we were unable to recover it. 
00:30:29.611 [2024-11-19 10:58:19.343171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.611 [2024-11-19 10:58:19.343237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.611 [2024-11-19 10:58:19.343252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.611 [2024-11-19 10:58:19.343259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.611 [2024-11-19 10:58:19.343265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.611 [2024-11-19 10:58:19.343280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.611 qpair failed and we were unable to recover it. 
00:30:29.611 [2024-11-19 10:58:19.353165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.611 [2024-11-19 10:58:19.353223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.611 [2024-11-19 10:58:19.353237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.611 [2024-11-19 10:58:19.353244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.611 [2024-11-19 10:58:19.353250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.611 [2024-11-19 10:58:19.353264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.611 qpair failed and we were unable to recover it. 
00:30:29.611 [2024-11-19 10:58:19.363240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.611 [2024-11-19 10:58:19.363340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.611 [2024-11-19 10:58:19.363357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.612 [2024-11-19 10:58:19.363364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.612 [2024-11-19 10:58:19.363369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.612 [2024-11-19 10:58:19.363385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.612 qpair failed and we were unable to recover it. 
00:30:29.612 [2024-11-19 10:58:19.373291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.612 [2024-11-19 10:58:19.373392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.612 [2024-11-19 10:58:19.373406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.612 [2024-11-19 10:58:19.373412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.612 [2024-11-19 10:58:19.373418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.612 [2024-11-19 10:58:19.373433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.612 qpair failed and we were unable to recover it. 
00:30:29.612 [2024-11-19 10:58:19.383309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.612 [2024-11-19 10:58:19.383367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.612 [2024-11-19 10:58:19.383381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.612 [2024-11-19 10:58:19.383388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.612 [2024-11-19 10:58:19.383394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.612 [2024-11-19 10:58:19.383409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.612 qpair failed and we were unable to recover it. 
00:30:29.612 [2024-11-19 10:58:19.393296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.612 [2024-11-19 10:58:19.393357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.612 [2024-11-19 10:58:19.393371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.612 [2024-11-19 10:58:19.393378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.612 [2024-11-19 10:58:19.393384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.612 [2024-11-19 10:58:19.393399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.612 qpair failed and we were unable to recover it. 
00:30:29.872 [2024-11-19 10:58:19.403369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.872 [2024-11-19 10:58:19.403432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.872 [2024-11-19 10:58:19.403447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.872 [2024-11-19 10:58:19.403453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.872 [2024-11-19 10:58:19.403463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.872 [2024-11-19 10:58:19.403478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.872 qpair failed and we were unable to recover it. 
00:30:29.872 [2024-11-19 10:58:19.413380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.872 [2024-11-19 10:58:19.413435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.872 [2024-11-19 10:58:19.413449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.872 [2024-11-19 10:58:19.413455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.872 [2024-11-19 10:58:19.413461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.872 [2024-11-19 10:58:19.413476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.872 qpair failed and we were unable to recover it. 
00:30:29.872 [2024-11-19 10:58:19.423432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.872 [2024-11-19 10:58:19.423535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.872 [2024-11-19 10:58:19.423550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.872 [2024-11-19 10:58:19.423556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.872 [2024-11-19 10:58:19.423562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.872 [2024-11-19 10:58:19.423577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.872 qpair failed and we were unable to recover it. 
00:30:29.872 [2024-11-19 10:58:19.433433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.872 [2024-11-19 10:58:19.433490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.872 [2024-11-19 10:58:19.433504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.872 [2024-11-19 10:58:19.433511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.872 [2024-11-19 10:58:19.433517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.872 [2024-11-19 10:58:19.433531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.872 qpair failed and we were unable to recover it. 
00:30:29.872 [2024-11-19 10:58:19.443432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.872 [2024-11-19 10:58:19.443483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.872 [2024-11-19 10:58:19.443497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.872 [2024-11-19 10:58:19.443503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.872 [2024-11-19 10:58:19.443509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.872 [2024-11-19 10:58:19.443524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.872 qpair failed and we were unable to recover it. 
00:30:29.872 [2024-11-19 10:58:19.453456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.872 [2024-11-19 10:58:19.453510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.872 [2024-11-19 10:58:19.453523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.872 [2024-11-19 10:58:19.453530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.872 [2024-11-19 10:58:19.453536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.872 [2024-11-19 10:58:19.453551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.872 qpair failed and we were unable to recover it. 
00:30:29.872 [2024-11-19 10:58:19.463493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.872 [2024-11-19 10:58:19.463549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.872 [2024-11-19 10:58:19.463563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.872 [2024-11-19 10:58:19.463570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.872 [2024-11-19 10:58:19.463576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.872 [2024-11-19 10:58:19.463590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.872 qpair failed and we were unable to recover it. 
00:30:29.872 [2024-11-19 10:58:19.473536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.873 [2024-11-19 10:58:19.473594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.873 [2024-11-19 10:58:19.473608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.873 [2024-11-19 10:58:19.473614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.873 [2024-11-19 10:58:19.473620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.873 [2024-11-19 10:58:19.473635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.873 qpair failed and we were unable to recover it. 
00:30:29.873 [2024-11-19 10:58:19.483603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.873 [2024-11-19 10:58:19.483657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.873 [2024-11-19 10:58:19.483671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.873 [2024-11-19 10:58:19.483677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.873 [2024-11-19 10:58:19.483683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.873 [2024-11-19 10:58:19.483698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.873 qpair failed and we were unable to recover it. 
00:30:29.873 [2024-11-19 10:58:19.493575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.873 [2024-11-19 10:58:19.493644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.873 [2024-11-19 10:58:19.493662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.873 [2024-11-19 10:58:19.493669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.873 [2024-11-19 10:58:19.493675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.873 [2024-11-19 10:58:19.493690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.873 qpair failed and we were unable to recover it. 
00:30:29.873 [2024-11-19 10:58:19.503641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.873 [2024-11-19 10:58:19.503698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.873 [2024-11-19 10:58:19.503712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.873 [2024-11-19 10:58:19.503719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.873 [2024-11-19 10:58:19.503725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.873 [2024-11-19 10:58:19.503741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.873 qpair failed and we were unable to recover it. 
00:30:29.873 [2024-11-19 10:58:19.513639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.873 [2024-11-19 10:58:19.513692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.873 [2024-11-19 10:58:19.513706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.873 [2024-11-19 10:58:19.513714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.873 [2024-11-19 10:58:19.513719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.873 [2024-11-19 10:58:19.513734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.873 qpair failed and we were unable to recover it. 
00:30:29.873 [2024-11-19 10:58:19.523678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.873 [2024-11-19 10:58:19.523759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.873 [2024-11-19 10:58:19.523774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.873 [2024-11-19 10:58:19.523780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.873 [2024-11-19 10:58:19.523786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.873 [2024-11-19 10:58:19.523801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.873 qpair failed and we were unable to recover it. 
00:30:29.873 [2024-11-19 10:58:19.533697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.873 [2024-11-19 10:58:19.533755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.873 [2024-11-19 10:58:19.533769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.873 [2024-11-19 10:58:19.533779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.873 [2024-11-19 10:58:19.533785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.873 [2024-11-19 10:58:19.533800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.873 qpair failed and we were unable to recover it. 
00:30:29.873 [2024-11-19 10:58:19.543727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.873 [2024-11-19 10:58:19.543780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.873 [2024-11-19 10:58:19.543794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.873 [2024-11-19 10:58:19.543801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.873 [2024-11-19 10:58:19.543806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.873 [2024-11-19 10:58:19.543822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.873 qpair failed and we were unable to recover it. 
00:30:29.873 [2024-11-19 10:58:19.553758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.873 [2024-11-19 10:58:19.553812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.873 [2024-11-19 10:58:19.553826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.873 [2024-11-19 10:58:19.553832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.873 [2024-11-19 10:58:19.553838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.873 [2024-11-19 10:58:19.553853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.873 qpair failed and we were unable to recover it. 
00:30:29.873 [2024-11-19 10:58:19.563768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.873 [2024-11-19 10:58:19.563815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.873 [2024-11-19 10:58:19.563828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.873 [2024-11-19 10:58:19.563835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.873 [2024-11-19 10:58:19.563841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.873 [2024-11-19 10:58:19.563855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.873 qpair failed and we were unable to recover it. 
00:30:29.873 [2024-11-19 10:58:19.573861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.873 [2024-11-19 10:58:19.573914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.873 [2024-11-19 10:58:19.573928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.873 [2024-11-19 10:58:19.573934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.873 [2024-11-19 10:58:19.573940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.873 [2024-11-19 10:58:19.573958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.873 qpair failed and we were unable to recover it. 
00:30:29.873 [2024-11-19 10:58:19.583838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.873 [2024-11-19 10:58:19.583886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.873 [2024-11-19 10:58:19.583900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.873 [2024-11-19 10:58:19.583906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.873 [2024-11-19 10:58:19.583912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.873 [2024-11-19 10:58:19.583926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.873 qpair failed and we were unable to recover it. 
00:30:29.873 [2024-11-19 10:58:19.593875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.873 [2024-11-19 10:58:19.593970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.873 [2024-11-19 10:58:19.593984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.873 [2024-11-19 10:58:19.593991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.873 [2024-11-19 10:58:19.593996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.873 [2024-11-19 10:58:19.594012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.873 qpair failed and we were unable to recover it. 
00:30:29.873 [2024-11-19 10:58:19.603898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.874 [2024-11-19 10:58:19.603952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.874 [2024-11-19 10:58:19.603966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.874 [2024-11-19 10:58:19.603973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.874 [2024-11-19 10:58:19.603979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.874 [2024-11-19 10:58:19.603993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.874 qpair failed and we were unable to recover it. 
00:30:29.874 [2024-11-19 10:58:19.613953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.874 [2024-11-19 10:58:19.614038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.874 [2024-11-19 10:58:19.614053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.874 [2024-11-19 10:58:19.614060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.874 [2024-11-19 10:58:19.614066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.874 [2024-11-19 10:58:19.614081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.874 qpair failed and we were unable to recover it. 
00:30:29.874 [2024-11-19 10:58:19.623949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.874 [2024-11-19 10:58:19.624003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.874 [2024-11-19 10:58:19.624017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.874 [2024-11-19 10:58:19.624024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.874 [2024-11-19 10:58:19.624030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.874 [2024-11-19 10:58:19.624045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.874 qpair failed and we were unable to recover it. 
00:30:29.874 [2024-11-19 10:58:19.633918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.874 [2024-11-19 10:58:19.633969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.874 [2024-11-19 10:58:19.633983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.874 [2024-11-19 10:58:19.633990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.874 [2024-11-19 10:58:19.633996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.874 [2024-11-19 10:58:19.634011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.874 qpair failed and we were unable to recover it. 
00:30:29.874 [2024-11-19 10:58:19.644006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.874 [2024-11-19 10:58:19.644059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.874 [2024-11-19 10:58:19.644074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.874 [2024-11-19 10:58:19.644081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.874 [2024-11-19 10:58:19.644087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.874 [2024-11-19 10:58:19.644101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.874 qpair failed and we were unable to recover it. 
00:30:29.874 [2024-11-19 10:58:19.654035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.874 [2024-11-19 10:58:19.654092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.874 [2024-11-19 10:58:19.654107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.874 [2024-11-19 10:58:19.654113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.874 [2024-11-19 10:58:19.654119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:29.874 [2024-11-19 10:58:19.654133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.874 qpair failed and we were unable to recover it. 
00:30:30.132 [2024-11-19 10:58:19.664086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.132 [2024-11-19 10:58:19.664155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.132 [2024-11-19 10:58:19.664170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.132 [2024-11-19 10:58:19.664180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.132 [2024-11-19 10:58:19.664185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.132 [2024-11-19 10:58:19.664205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.132 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-11-19 10:58:19.674112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.133 [2024-11-19 10:58:19.674165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.133 [2024-11-19 10:58:19.674179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.133 [2024-11-19 10:58:19.674186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.133 [2024-11-19 10:58:19.674192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.133 [2024-11-19 10:58:19.674212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-11-19 10:58:19.684124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.133 [2024-11-19 10:58:19.684212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.133 [2024-11-19 10:58:19.684227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.133 [2024-11-19 10:58:19.684234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.133 [2024-11-19 10:58:19.684239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.133 [2024-11-19 10:58:19.684255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-11-19 10:58:19.694132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.133 [2024-11-19 10:58:19.694221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.133 [2024-11-19 10:58:19.694236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.133 [2024-11-19 10:58:19.694243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.133 [2024-11-19 10:58:19.694248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.133 [2024-11-19 10:58:19.694264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-11-19 10:58:19.704225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.133 [2024-11-19 10:58:19.704280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.133 [2024-11-19 10:58:19.704294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.133 [2024-11-19 10:58:19.704301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.133 [2024-11-19 10:58:19.704306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.133 [2024-11-19 10:58:19.704325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-11-19 10:58:19.714227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.133 [2024-11-19 10:58:19.714292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.133 [2024-11-19 10:58:19.714306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.133 [2024-11-19 10:58:19.714312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.133 [2024-11-19 10:58:19.714318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.133 [2024-11-19 10:58:19.714333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-11-19 10:58:19.724251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.133 [2024-11-19 10:58:19.724308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.133 [2024-11-19 10:58:19.724323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.133 [2024-11-19 10:58:19.724330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.133 [2024-11-19 10:58:19.724335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.133 [2024-11-19 10:58:19.724350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-11-19 10:58:19.734296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.133 [2024-11-19 10:58:19.734350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.133 [2024-11-19 10:58:19.734364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.133 [2024-11-19 10:58:19.734371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.133 [2024-11-19 10:58:19.734377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.133 [2024-11-19 10:58:19.734392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-11-19 10:58:19.744229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.133 [2024-11-19 10:58:19.744283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.133 [2024-11-19 10:58:19.744297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.133 [2024-11-19 10:58:19.744303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.133 [2024-11-19 10:58:19.744309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.133 [2024-11-19 10:58:19.744323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-11-19 10:58:19.754375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.133 [2024-11-19 10:58:19.754436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.133 [2024-11-19 10:58:19.754451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.133 [2024-11-19 10:58:19.754458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.133 [2024-11-19 10:58:19.754463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.133 [2024-11-19 10:58:19.754478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-11-19 10:58:19.764350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.133 [2024-11-19 10:58:19.764406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.133 [2024-11-19 10:58:19.764420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.133 [2024-11-19 10:58:19.764427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.133 [2024-11-19 10:58:19.764433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.133 [2024-11-19 10:58:19.764448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-11-19 10:58:19.774382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.133 [2024-11-19 10:58:19.774437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.133 [2024-11-19 10:58:19.774451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.133 [2024-11-19 10:58:19.774457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.133 [2024-11-19 10:58:19.774463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.133 [2024-11-19 10:58:19.774478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.133 qpair failed and we were unable to recover it. 
00:30:30.133 [2024-11-19 10:58:19.784390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.134 [2024-11-19 10:58:19.784442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.134 [2024-11-19 10:58:19.784456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.134 [2024-11-19 10:58:19.784462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.134 [2024-11-19 10:58:19.784468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.134 [2024-11-19 10:58:19.784483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.134 qpair failed and we were unable to recover it. 
00:30:30.134 [2024-11-19 10:58:19.794438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.134 [2024-11-19 10:58:19.794508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.134 [2024-11-19 10:58:19.794525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.134 [2024-11-19 10:58:19.794532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.134 [2024-11-19 10:58:19.794538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.134 [2024-11-19 10:58:19.794553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.134 qpair failed and we were unable to recover it. 
00:30:30.134 [2024-11-19 10:58:19.804455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.134 [2024-11-19 10:58:19.804506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.134 [2024-11-19 10:58:19.804520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.134 [2024-11-19 10:58:19.804527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.134 [2024-11-19 10:58:19.804532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.134 [2024-11-19 10:58:19.804547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.134 qpair failed and we were unable to recover it. 
00:30:30.134 [2024-11-19 10:58:19.814534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.134 [2024-11-19 10:58:19.814587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.134 [2024-11-19 10:58:19.814601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.134 [2024-11-19 10:58:19.814608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.134 [2024-11-19 10:58:19.814614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.134 [2024-11-19 10:58:19.814629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.134 qpair failed and we were unable to recover it. 
00:30:30.134 [2024-11-19 10:58:19.824543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.134 [2024-11-19 10:58:19.824631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.134 [2024-11-19 10:58:19.824645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.134 [2024-11-19 10:58:19.824652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.134 [2024-11-19 10:58:19.824658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.134 [2024-11-19 10:58:19.824673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.134 qpair failed and we were unable to recover it. 
00:30:30.134 [2024-11-19 10:58:19.834538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.134 [2024-11-19 10:58:19.834597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.134 [2024-11-19 10:58:19.834611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.134 [2024-11-19 10:58:19.834618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.134 [2024-11-19 10:58:19.834627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.134 [2024-11-19 10:58:19.834642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.134 qpair failed and we were unable to recover it. 
00:30:30.134 [2024-11-19 10:58:19.844575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.134 [2024-11-19 10:58:19.844650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.134 [2024-11-19 10:58:19.844665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.134 [2024-11-19 10:58:19.844672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.134 [2024-11-19 10:58:19.844678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.134 [2024-11-19 10:58:19.844694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.134 qpair failed and we were unable to recover it. 
00:30:30.134 [2024-11-19 10:58:19.854592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.134 [2024-11-19 10:58:19.854650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.134 [2024-11-19 10:58:19.854664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.134 [2024-11-19 10:58:19.854671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.134 [2024-11-19 10:58:19.854677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.134 [2024-11-19 10:58:19.854692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.134 qpair failed and we were unable to recover it. 
00:30:30.134 [2024-11-19 10:58:19.864624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.134 [2024-11-19 10:58:19.864681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.134 [2024-11-19 10:58:19.864697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.134 [2024-11-19 10:58:19.864709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.134 [2024-11-19 10:58:19.864716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.134 [2024-11-19 10:58:19.864733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.134 qpair failed and we were unable to recover it. 
00:30:30.134 [2024-11-19 10:58:19.874656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.134 [2024-11-19 10:58:19.874710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.134 [2024-11-19 10:58:19.874724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.134 [2024-11-19 10:58:19.874732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.134 [2024-11-19 10:58:19.874737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.134 [2024-11-19 10:58:19.874753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.134 qpair failed and we were unable to recover it. 
00:30:30.134 [2024-11-19 10:58:19.884673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.134 [2024-11-19 10:58:19.884731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.134 [2024-11-19 10:58:19.884745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.134 [2024-11-19 10:58:19.884752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.134 [2024-11-19 10:58:19.884758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.135 [2024-11-19 10:58:19.884773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.135 qpair failed and we were unable to recover it. 
00:30:30.135 [2024-11-19 10:58:19.894710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.135 [2024-11-19 10:58:19.894767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.135 [2024-11-19 10:58:19.894781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.135 [2024-11-19 10:58:19.894788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.135 [2024-11-19 10:58:19.894793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.135 [2024-11-19 10:58:19.894808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-11-19 10:58:19.904681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.135 [2024-11-19 10:58:19.904736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.135 [2024-11-19 10:58:19.904750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.135 [2024-11-19 10:58:19.904757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.135 [2024-11-19 10:58:19.904764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.135 [2024-11-19 10:58:19.904778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.135 [2024-11-19 10:58:19.914779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.135 [2024-11-19 10:58:19.914859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.135 [2024-11-19 10:58:19.914873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.135 [2024-11-19 10:58:19.914880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.135 [2024-11-19 10:58:19.914886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.135 [2024-11-19 10:58:19.914901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.135 qpair failed and we were unable to recover it.
00:30:30.394 [2024-11-19 10:58:19.924844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.394 [2024-11-19 10:58:19.924901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.394 [2024-11-19 10:58:19.924919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.394 [2024-11-19 10:58:19.924926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.394 [2024-11-19 10:58:19.924932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.394 [2024-11-19 10:58:19.924947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.394 qpair failed and we were unable to recover it.
00:30:30.394 [2024-11-19 10:58:19.934864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.394 [2024-11-19 10:58:19.934919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.394 [2024-11-19 10:58:19.934933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.394 [2024-11-19 10:58:19.934940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.394 [2024-11-19 10:58:19.934946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.394 [2024-11-19 10:58:19.934962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.394 qpair failed and we were unable to recover it.
00:30:30.394 [2024-11-19 10:58:19.944944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.394 [2024-11-19 10:58:19.945001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.394 [2024-11-19 10:58:19.945016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.394 [2024-11-19 10:58:19.945022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.394 [2024-11-19 10:58:19.945028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.394 [2024-11-19 10:58:19.945043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.394 qpair failed and we were unable to recover it.
00:30:30.394 [2024-11-19 10:58:19.954903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.394 [2024-11-19 10:58:19.954955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.394 [2024-11-19 10:58:19.954970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.394 [2024-11-19 10:58:19.954976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.394 [2024-11-19 10:58:19.954982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.394 [2024-11-19 10:58:19.954997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.394 qpair failed and we were unable to recover it.
00:30:30.394 [2024-11-19 10:58:19.964928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.394 [2024-11-19 10:58:19.964979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.394 [2024-11-19 10:58:19.964993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.394 [2024-11-19 10:58:19.965000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.394 [2024-11-19 10:58:19.965009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.394 [2024-11-19 10:58:19.965024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.394 qpair failed and we were unable to recover it.
00:30:30.394 [2024-11-19 10:58:19.974992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.394 [2024-11-19 10:58:19.975088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.394 [2024-11-19 10:58:19.975102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.394 [2024-11-19 10:58:19.975109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.394 [2024-11-19 10:58:19.975115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.394 [2024-11-19 10:58:19.975129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.394 qpair failed and we were unable to recover it.
00:30:30.394 [2024-11-19 10:58:19.984896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.394 [2024-11-19 10:58:19.984958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.394 [2024-11-19 10:58:19.984972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.394 [2024-11-19 10:58:19.984979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.394 [2024-11-19 10:58:19.984985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.394 [2024-11-19 10:58:19.985000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.394 qpair failed and we were unable to recover it.
00:30:30.394 [2024-11-19 10:58:19.995024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.394 [2024-11-19 10:58:19.995078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.394 [2024-11-19 10:58:19.995092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.394 [2024-11-19 10:58:19.995099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.394 [2024-11-19 10:58:19.995105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.394 [2024-11-19 10:58:19.995120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.394 qpair failed and we were unable to recover it.
00:30:30.394 [2024-11-19 10:58:20.005001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.395 [2024-11-19 10:58:20.005072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.395 [2024-11-19 10:58:20.005088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.395 [2024-11-19 10:58:20.005096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.395 [2024-11-19 10:58:20.005102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.395 [2024-11-19 10:58:20.005119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.395 qpair failed and we were unable to recover it.
00:30:30.395 [2024-11-19 10:58:20.015066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.395 [2024-11-19 10:58:20.015131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.395 [2024-11-19 10:58:20.015145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.395 [2024-11-19 10:58:20.015152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.395 [2024-11-19 10:58:20.015158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.395 [2024-11-19 10:58:20.015172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.395 qpair failed and we were unable to recover it.
00:30:30.395 [2024-11-19 10:58:20.025106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.395 [2024-11-19 10:58:20.025186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.395 [2024-11-19 10:58:20.025205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.395 [2024-11-19 10:58:20.025213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.395 [2024-11-19 10:58:20.025219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.395 [2024-11-19 10:58:20.025235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.395 qpair failed and we were unable to recover it.
00:30:30.395 [2024-11-19 10:58:20.035098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.395 [2024-11-19 10:58:20.035185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.395 [2024-11-19 10:58:20.035204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.395 [2024-11-19 10:58:20.035212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.395 [2024-11-19 10:58:20.035218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.395 [2024-11-19 10:58:20.035233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.395 qpair failed and we were unable to recover it.
00:30:30.395 [2024-11-19 10:58:20.045146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.395 [2024-11-19 10:58:20.045223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.395 [2024-11-19 10:58:20.045238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.395 [2024-11-19 10:58:20.045245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.395 [2024-11-19 10:58:20.045251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.395 [2024-11-19 10:58:20.045267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.395 qpair failed and we were unable to recover it.
00:30:30.395 [2024-11-19 10:58:20.055177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.395 [2024-11-19 10:58:20.055245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.395 [2024-11-19 10:58:20.055260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.395 [2024-11-19 10:58:20.055267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.395 [2024-11-19 10:58:20.055273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.395 [2024-11-19 10:58:20.055288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.395 qpair failed and we were unable to recover it.
00:30:30.395 [2024-11-19 10:58:20.065248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.395 [2024-11-19 10:58:20.065305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.395 [2024-11-19 10:58:20.065320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.395 [2024-11-19 10:58:20.065327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.395 [2024-11-19 10:58:20.065334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.395 [2024-11-19 10:58:20.065349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.395 qpair failed and we were unable to recover it.
00:30:30.395 [2024-11-19 10:58:20.075289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.395 [2024-11-19 10:58:20.075342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.395 [2024-11-19 10:58:20.075357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.395 [2024-11-19 10:58:20.075363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.395 [2024-11-19 10:58:20.075369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.395 [2024-11-19 10:58:20.075385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.395 qpair failed and we were unable to recover it.
00:30:30.395 [2024-11-19 10:58:20.085261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.395 [2024-11-19 10:58:20.085323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.395 [2024-11-19 10:58:20.085338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.395 [2024-11-19 10:58:20.085345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.395 [2024-11-19 10:58:20.085351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.395 [2024-11-19 10:58:20.085366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.395 qpair failed and we were unable to recover it.
00:30:30.395 [2024-11-19 10:58:20.095236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.395 [2024-11-19 10:58:20.095292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.395 [2024-11-19 10:58:20.095307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.395 [2024-11-19 10:58:20.095334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.395 [2024-11-19 10:58:20.095341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.395 [2024-11-19 10:58:20.095358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.395 qpair failed and we were unable to recover it.
00:30:30.395 [2024-11-19 10:58:20.105243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.395 [2024-11-19 10:58:20.105299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.395 [2024-11-19 10:58:20.105313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.395 [2024-11-19 10:58:20.105320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.395 [2024-11-19 10:58:20.105327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.395 [2024-11-19 10:58:20.105342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.395 qpair failed and we were unable to recover it.
00:30:30.395 [2024-11-19 10:58:20.115340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.395 [2024-11-19 10:58:20.115393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.396 [2024-11-19 10:58:20.115407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.396 [2024-11-19 10:58:20.115414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.396 [2024-11-19 10:58:20.115420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.396 [2024-11-19 10:58:20.115435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.396 qpair failed and we were unable to recover it.
00:30:30.396 [2024-11-19 10:58:20.125348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.396 [2024-11-19 10:58:20.125413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.396 [2024-11-19 10:58:20.125429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.396 [2024-11-19 10:58:20.125436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.396 [2024-11-19 10:58:20.125442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.396 [2024-11-19 10:58:20.125457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.396 qpair failed and we were unable to recover it.
00:30:30.396 [2024-11-19 10:58:20.135407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.396 [2024-11-19 10:58:20.135465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.396 [2024-11-19 10:58:20.135479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.396 [2024-11-19 10:58:20.135486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.396 [2024-11-19 10:58:20.135492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.396 [2024-11-19 10:58:20.135511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.396 qpair failed and we were unable to recover it.
00:30:30.396 [2024-11-19 10:58:20.145490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.396 [2024-11-19 10:58:20.145545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.396 [2024-11-19 10:58:20.145559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.396 [2024-11-19 10:58:20.145566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.396 [2024-11-19 10:58:20.145572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.396 [2024-11-19 10:58:20.145588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.396 qpair failed and we were unable to recover it.
00:30:30.396 [2024-11-19 10:58:20.155419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.396 [2024-11-19 10:58:20.155477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.396 [2024-11-19 10:58:20.155491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.396 [2024-11-19 10:58:20.155498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.396 [2024-11-19 10:58:20.155504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.396 [2024-11-19 10:58:20.155519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.396 qpair failed and we were unable to recover it.
00:30:30.396 [2024-11-19 10:58:20.165472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.396 [2024-11-19 10:58:20.165527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.396 [2024-11-19 10:58:20.165541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.396 [2024-11-19 10:58:20.165548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.396 [2024-11-19 10:58:20.165554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.396 [2024-11-19 10:58:20.165569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.396 qpair failed and we were unable to recover it.
00:30:30.396 [2024-11-19 10:58:20.175507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.396 [2024-11-19 10:58:20.175565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.396 [2024-11-19 10:58:20.175580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.396 [2024-11-19 10:58:20.175586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.396 [2024-11-19 10:58:20.175592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.396 [2024-11-19 10:58:20.175608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.396 qpair failed and we were unable to recover it.
00:30:30.654 [2024-11-19 10:58:20.185469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.654 [2024-11-19 10:58:20.185527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.654 [2024-11-19 10:58:20.185541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.654 [2024-11-19 10:58:20.185548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.654 [2024-11-19 10:58:20.185554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.654 [2024-11-19 10:58:20.185569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.654 qpair failed and we were unable to recover it.
00:30:30.654 [2024-11-19 10:58:20.195583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.654 [2024-11-19 10:58:20.195636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.654 [2024-11-19 10:58:20.195651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.654 [2024-11-19 10:58:20.195658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.654 [2024-11-19 10:58:20.195664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:30.654 [2024-11-19 10:58:20.195679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.654 qpair failed and we were unable to recover it.
00:30:30.654 [2024-11-19 10:58:20.205516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.654 [2024-11-19 10:58:20.205571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.654 [2024-11-19 10:58:20.205585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.654 [2024-11-19 10:58:20.205592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.654 [2024-11-19 10:58:20.205597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.654 [2024-11-19 10:58:20.205613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.654 qpair failed and we were unable to recover it. 
00:30:30.654 [2024-11-19 10:58:20.215613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.654 [2024-11-19 10:58:20.215668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.654 [2024-11-19 10:58:20.215682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.654 [2024-11-19 10:58:20.215689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.654 [2024-11-19 10:58:20.215695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.654 [2024-11-19 10:58:20.215710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.654 qpair failed and we were unable to recover it. 
00:30:30.654 [2024-11-19 10:58:20.225572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.655 [2024-11-19 10:58:20.225622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.655 [2024-11-19 10:58:20.225636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.655 [2024-11-19 10:58:20.225646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.655 [2024-11-19 10:58:20.225652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.655 [2024-11-19 10:58:20.225668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.655 qpair failed and we were unable to recover it. 
00:30:30.655 [2024-11-19 10:58:20.235697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.655 [2024-11-19 10:58:20.235756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.655 [2024-11-19 10:58:20.235771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.655 [2024-11-19 10:58:20.235778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.655 [2024-11-19 10:58:20.235784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.655 [2024-11-19 10:58:20.235799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.655 qpair failed and we were unable to recover it. 
00:30:30.655 [2024-11-19 10:58:20.245702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.655 [2024-11-19 10:58:20.245751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.655 [2024-11-19 10:58:20.245766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.655 [2024-11-19 10:58:20.245772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.655 [2024-11-19 10:58:20.245778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.655 [2024-11-19 10:58:20.245794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.655 qpair failed and we were unable to recover it. 
00:30:30.655 [2024-11-19 10:58:20.255733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.655 [2024-11-19 10:58:20.255838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.655 [2024-11-19 10:58:20.255852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.655 [2024-11-19 10:58:20.255859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.655 [2024-11-19 10:58:20.255864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.655 [2024-11-19 10:58:20.255880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.655 qpair failed and we were unable to recover it. 
00:30:30.655 [2024-11-19 10:58:20.265766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.655 [2024-11-19 10:58:20.265825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.655 [2024-11-19 10:58:20.265839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.655 [2024-11-19 10:58:20.265846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.655 [2024-11-19 10:58:20.265852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.655 [2024-11-19 10:58:20.265870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.655 qpair failed and we were unable to recover it. 
00:30:30.655 [2024-11-19 10:58:20.275818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.655 [2024-11-19 10:58:20.275875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.655 [2024-11-19 10:58:20.275889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.655 [2024-11-19 10:58:20.275896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.655 [2024-11-19 10:58:20.275901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.655 [2024-11-19 10:58:20.275917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.655 qpair failed and we were unable to recover it. 
00:30:30.655 [2024-11-19 10:58:20.285868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.655 [2024-11-19 10:58:20.285926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.655 [2024-11-19 10:58:20.285941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.655 [2024-11-19 10:58:20.285948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.655 [2024-11-19 10:58:20.285953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.655 [2024-11-19 10:58:20.285968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.655 qpair failed and we were unable to recover it. 
00:30:30.655 [2024-11-19 10:58:20.295843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.655 [2024-11-19 10:58:20.295951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.655 [2024-11-19 10:58:20.295965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.655 [2024-11-19 10:58:20.295971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.655 [2024-11-19 10:58:20.295977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.655 [2024-11-19 10:58:20.295992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.655 qpair failed and we were unable to recover it. 
00:30:30.655 [2024-11-19 10:58:20.305860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.655 [2024-11-19 10:58:20.305917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.655 [2024-11-19 10:58:20.305933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.655 [2024-11-19 10:58:20.305941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.655 [2024-11-19 10:58:20.305946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.655 [2024-11-19 10:58:20.305961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.655 qpair failed and we were unable to recover it. 
00:30:30.655 [2024-11-19 10:58:20.315851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.655 [2024-11-19 10:58:20.315903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.655 [2024-11-19 10:58:20.315918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.655 [2024-11-19 10:58:20.315925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.655 [2024-11-19 10:58:20.315930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.655 [2024-11-19 10:58:20.315945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.655 qpair failed and we were unable to recover it. 
00:30:30.655 [2024-11-19 10:58:20.325863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.655 [2024-11-19 10:58:20.325942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.655 [2024-11-19 10:58:20.325956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.655 [2024-11-19 10:58:20.325963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.655 [2024-11-19 10:58:20.325969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.655 [2024-11-19 10:58:20.325983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.655 qpair failed and we were unable to recover it. 
00:30:30.655 [2024-11-19 10:58:20.335900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.655 [2024-11-19 10:58:20.335956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.655 [2024-11-19 10:58:20.335971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.655 [2024-11-19 10:58:20.335977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.655 [2024-11-19 10:58:20.335983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.655 [2024-11-19 10:58:20.335998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.655 qpair failed and we were unable to recover it. 
00:30:30.655 [2024-11-19 10:58:20.345922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.656 [2024-11-19 10:58:20.345979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.656 [2024-11-19 10:58:20.345993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.656 [2024-11-19 10:58:20.346000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.656 [2024-11-19 10:58:20.346005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.656 [2024-11-19 10:58:20.346020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.656 qpair failed and we were unable to recover it. 
00:30:30.656 [2024-11-19 10:58:20.356043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.656 [2024-11-19 10:58:20.356094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.656 [2024-11-19 10:58:20.356111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.656 [2024-11-19 10:58:20.356118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.656 [2024-11-19 10:58:20.356124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.656 [2024-11-19 10:58:20.356139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.656 qpair failed and we were unable to recover it. 
00:30:30.656 [2024-11-19 10:58:20.366054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.656 [2024-11-19 10:58:20.366109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.656 [2024-11-19 10:58:20.366124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.656 [2024-11-19 10:58:20.366131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.656 [2024-11-19 10:58:20.366137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.656 [2024-11-19 10:58:20.366152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.656 qpair failed and we were unable to recover it. 
00:30:30.656 [2024-11-19 10:58:20.376086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.656 [2024-11-19 10:58:20.376141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.656 [2024-11-19 10:58:20.376156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.656 [2024-11-19 10:58:20.376163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.656 [2024-11-19 10:58:20.376169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.656 [2024-11-19 10:58:20.376184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.656 qpair failed and we were unable to recover it. 
00:30:30.656 [2024-11-19 10:58:20.386104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.656 [2024-11-19 10:58:20.386164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.656 [2024-11-19 10:58:20.386179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.656 [2024-11-19 10:58:20.386186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.656 [2024-11-19 10:58:20.386191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.656 [2024-11-19 10:58:20.386211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.656 qpair failed and we were unable to recover it. 
00:30:30.656 [2024-11-19 10:58:20.396184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.656 [2024-11-19 10:58:20.396290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.656 [2024-11-19 10:58:20.396305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.656 [2024-11-19 10:58:20.396313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.656 [2024-11-19 10:58:20.396322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.656 [2024-11-19 10:58:20.396337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.656 qpair failed and we were unable to recover it. 
00:30:30.656 [2024-11-19 10:58:20.406156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.656 [2024-11-19 10:58:20.406217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.656 [2024-11-19 10:58:20.406232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.656 [2024-11-19 10:58:20.406238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.656 [2024-11-19 10:58:20.406244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.656 [2024-11-19 10:58:20.406259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.656 qpair failed and we were unable to recover it. 
00:30:30.656 [2024-11-19 10:58:20.416232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.656 [2024-11-19 10:58:20.416290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.656 [2024-11-19 10:58:20.416304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.656 [2024-11-19 10:58:20.416311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.656 [2024-11-19 10:58:20.416317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.656 [2024-11-19 10:58:20.416332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.656 qpair failed and we were unable to recover it. 
00:30:30.656 [2024-11-19 10:58:20.426211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.656 [2024-11-19 10:58:20.426267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.656 [2024-11-19 10:58:20.426281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.656 [2024-11-19 10:58:20.426289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.656 [2024-11-19 10:58:20.426295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.656 [2024-11-19 10:58:20.426310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.656 qpair failed and we were unable to recover it. 
00:30:30.656 [2024-11-19 10:58:20.436251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.656 [2024-11-19 10:58:20.436305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.656 [2024-11-19 10:58:20.436319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.656 [2024-11-19 10:58:20.436326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.656 [2024-11-19 10:58:20.436332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.656 [2024-11-19 10:58:20.436346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.656 qpair failed and we were unable to recover it. 
00:30:30.914 [2024-11-19 10:58:20.446242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.914 [2024-11-19 10:58:20.446318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.914 [2024-11-19 10:58:20.446332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.914 [2024-11-19 10:58:20.446339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.914 [2024-11-19 10:58:20.446345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.914 [2024-11-19 10:58:20.446360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.914 qpair failed and we were unable to recover it. 
00:30:30.914 [2024-11-19 10:58:20.456295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.914 [2024-11-19 10:58:20.456349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.914 [2024-11-19 10:58:20.456364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.914 [2024-11-19 10:58:20.456371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.914 [2024-11-19 10:58:20.456377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.914 [2024-11-19 10:58:20.456391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.915 qpair failed and we were unable to recover it. 
00:30:30.915 [2024-11-19 10:58:20.466326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.915 [2024-11-19 10:58:20.466384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.915 [2024-11-19 10:58:20.466398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.915 [2024-11-19 10:58:20.466405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.915 [2024-11-19 10:58:20.466411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.915 [2024-11-19 10:58:20.466425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.915 qpair failed and we were unable to recover it. 
00:30:30.915 [2024-11-19 10:58:20.476369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.915 [2024-11-19 10:58:20.476422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.915 [2024-11-19 10:58:20.476437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.915 [2024-11-19 10:58:20.476444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.915 [2024-11-19 10:58:20.476450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.915 [2024-11-19 10:58:20.476464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.915 qpair failed and we were unable to recover it. 
00:30:30.915 [2024-11-19 10:58:20.486387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.915 [2024-11-19 10:58:20.486438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.915 [2024-11-19 10:58:20.486455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.915 [2024-11-19 10:58:20.486462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.915 [2024-11-19 10:58:20.486468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.915 [2024-11-19 10:58:20.486483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.915 qpair failed and we were unable to recover it. 
00:30:30.915 [2024-11-19 10:58:20.496358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.915 [2024-11-19 10:58:20.496415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.915 [2024-11-19 10:58:20.496430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.915 [2024-11-19 10:58:20.496436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.915 [2024-11-19 10:58:20.496442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.915 [2024-11-19 10:58:20.496457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.915 qpair failed and we were unable to recover it. 
00:30:30.915 [2024-11-19 10:58:20.506432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.915 [2024-11-19 10:58:20.506501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.915 [2024-11-19 10:58:20.506515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.915 [2024-11-19 10:58:20.506522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.915 [2024-11-19 10:58:20.506528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.915 [2024-11-19 10:58:20.506542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.915 qpair failed and we were unable to recover it. 
00:30:30.915 [2024-11-19 10:58:20.516454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.915 [2024-11-19 10:58:20.516509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.915 [2024-11-19 10:58:20.516523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.915 [2024-11-19 10:58:20.516530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.915 [2024-11-19 10:58:20.516536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.915 [2024-11-19 10:58:20.516550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.915 qpair failed and we were unable to recover it. 
00:30:30.915 [2024-11-19 10:58:20.526457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.915 [2024-11-19 10:58:20.526532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.915 [2024-11-19 10:58:20.526546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.915 [2024-11-19 10:58:20.526553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.915 [2024-11-19 10:58:20.526562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.915 [2024-11-19 10:58:20.526577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.915 qpair failed and we were unable to recover it. 
00:30:30.915 [2024-11-19 10:58:20.536457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.915 [2024-11-19 10:58:20.536554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.915 [2024-11-19 10:58:20.536568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.915 [2024-11-19 10:58:20.536575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.915 [2024-11-19 10:58:20.536580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.915 [2024-11-19 10:58:20.536594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.915 qpair failed and we were unable to recover it. 
00:30:30.915 [2024-11-19 10:58:20.546553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.915 [2024-11-19 10:58:20.546657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.915 [2024-11-19 10:58:20.546671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.915 [2024-11-19 10:58:20.546678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.915 [2024-11-19 10:58:20.546683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.915 [2024-11-19 10:58:20.546699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.915 qpair failed and we were unable to recover it. 
00:30:30.915 [2024-11-19 10:58:20.556490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.915 [2024-11-19 10:58:20.556546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.915 [2024-11-19 10:58:20.556561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.915 [2024-11-19 10:58:20.556568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.915 [2024-11-19 10:58:20.556573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.915 [2024-11-19 10:58:20.556588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.915 qpair failed and we were unable to recover it. 
00:30:30.915 [2024-11-19 10:58:20.566606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.915 [2024-11-19 10:58:20.566656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.915 [2024-11-19 10:58:20.566670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.915 [2024-11-19 10:58:20.566676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.915 [2024-11-19 10:58:20.566682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.915 [2024-11-19 10:58:20.566696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.915 qpair failed and we were unable to recover it. 
00:30:30.915 [2024-11-19 10:58:20.576620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.915 [2024-11-19 10:58:20.576677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.915 [2024-11-19 10:58:20.576691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.915 [2024-11-19 10:58:20.576699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.915 [2024-11-19 10:58:20.576704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.915 [2024-11-19 10:58:20.576719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.915 qpair failed and we were unable to recover it. 
00:30:30.915 [2024-11-19 10:58:20.586643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.915 [2024-11-19 10:58:20.586698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.915 [2024-11-19 10:58:20.586712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.915 [2024-11-19 10:58:20.586719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.915 [2024-11-19 10:58:20.586725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.916 [2024-11-19 10:58:20.586740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.916 qpair failed and we were unable to recover it. 
00:30:30.916 [2024-11-19 10:58:20.596686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.916 [2024-11-19 10:58:20.596768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.916 [2024-11-19 10:58:20.596783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.916 [2024-11-19 10:58:20.596790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.916 [2024-11-19 10:58:20.596796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.916 [2024-11-19 10:58:20.596810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.916 qpair failed and we were unable to recover it. 
00:30:30.916 [2024-11-19 10:58:20.606637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.916 [2024-11-19 10:58:20.606691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.916 [2024-11-19 10:58:20.606705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.916 [2024-11-19 10:58:20.606711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.916 [2024-11-19 10:58:20.606717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.916 [2024-11-19 10:58:20.606732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.916 qpair failed and we were unable to recover it. 
00:30:30.916 [2024-11-19 10:58:20.616670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.916 [2024-11-19 10:58:20.616732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.916 [2024-11-19 10:58:20.616747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.916 [2024-11-19 10:58:20.616753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.916 [2024-11-19 10:58:20.616759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.916 [2024-11-19 10:58:20.616774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.916 qpair failed and we were unable to recover it. 
00:30:30.916 [2024-11-19 10:58:20.626795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.916 [2024-11-19 10:58:20.626855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.916 [2024-11-19 10:58:20.626869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.916 [2024-11-19 10:58:20.626876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.916 [2024-11-19 10:58:20.626882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.916 [2024-11-19 10:58:20.626897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.916 qpair failed and we were unable to recover it. 
00:30:30.916 [2024-11-19 10:58:20.636790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.916 [2024-11-19 10:58:20.636840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.916 [2024-11-19 10:58:20.636854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.916 [2024-11-19 10:58:20.636861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.916 [2024-11-19 10:58:20.636867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.916 [2024-11-19 10:58:20.636881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.916 qpair failed and we were unable to recover it. 
00:30:30.916 [2024-11-19 10:58:20.646845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.916 [2024-11-19 10:58:20.646940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.916 [2024-11-19 10:58:20.646954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.916 [2024-11-19 10:58:20.646960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.916 [2024-11-19 10:58:20.646966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.916 [2024-11-19 10:58:20.646981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.916 qpair failed and we were unable to recover it. 
00:30:30.916 [2024-11-19 10:58:20.656839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.916 [2024-11-19 10:58:20.656895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.916 [2024-11-19 10:58:20.656909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.916 [2024-11-19 10:58:20.656921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.916 [2024-11-19 10:58:20.656927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.916 [2024-11-19 10:58:20.656942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.916 qpair failed and we were unable to recover it. 
00:30:30.916 [2024-11-19 10:58:20.666962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.916 [2024-11-19 10:58:20.667016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.916 [2024-11-19 10:58:20.667031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.916 [2024-11-19 10:58:20.667037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.916 [2024-11-19 10:58:20.667043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.916 [2024-11-19 10:58:20.667058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.916 qpair failed and we were unable to recover it. 
00:30:30.916 [2024-11-19 10:58:20.676953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.916 [2024-11-19 10:58:20.677007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.916 [2024-11-19 10:58:20.677021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.916 [2024-11-19 10:58:20.677028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.916 [2024-11-19 10:58:20.677034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.916 [2024-11-19 10:58:20.677049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.916 qpair failed and we were unable to recover it. 
00:30:30.916 [2024-11-19 10:58:20.686984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.916 [2024-11-19 10:58:20.687069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.916 [2024-11-19 10:58:20.687085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.916 [2024-11-19 10:58:20.687092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.916 [2024-11-19 10:58:20.687100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.916 [2024-11-19 10:58:20.687115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.916 qpair failed and we were unable to recover it. 
00:30:30.916 [2024-11-19 10:58:20.696999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.916 [2024-11-19 10:58:20.697057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.916 [2024-11-19 10:58:20.697071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.916 [2024-11-19 10:58:20.697077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.916 [2024-11-19 10:58:20.697084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:30.916 [2024-11-19 10:58:20.697102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.916 qpair failed and we were unable to recover it. 
00:30:31.174 [2024-11-19 10:58:20.706930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.174 [2024-11-19 10:58:20.707007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.174 [2024-11-19 10:58:20.707022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.174 [2024-11-19 10:58:20.707029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.174 [2024-11-19 10:58:20.707035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.174 [2024-11-19 10:58:20.707050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.174 qpair failed and we were unable to recover it. 
00:30:31.174 [2024-11-19 10:58:20.717038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.174 [2024-11-19 10:58:20.717106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.174 [2024-11-19 10:58:20.717120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.174 [2024-11-19 10:58:20.717128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.174 [2024-11-19 10:58:20.717134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.174 [2024-11-19 10:58:20.717149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.174 qpair failed and we were unable to recover it. 
00:30:31.174 [2024-11-19 10:58:20.727063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.174 [2024-11-19 10:58:20.727134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.174 [2024-11-19 10:58:20.727150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.174 [2024-11-19 10:58:20.727157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.174 [2024-11-19 10:58:20.727163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.174 [2024-11-19 10:58:20.727179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.174 qpair failed and we were unable to recover it. 
00:30:31.174 [2024-11-19 10:58:20.737087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.174 [2024-11-19 10:58:20.737168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.174 [2024-11-19 10:58:20.737184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.174 [2024-11-19 10:58:20.737191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.174 [2024-11-19 10:58:20.737197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.174 [2024-11-19 10:58:20.737218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.174 qpair failed and we were unable to recover it. 
00:30:31.174 [2024-11-19 10:58:20.747116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.174 [2024-11-19 10:58:20.747176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.174 [2024-11-19 10:58:20.747190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.174 [2024-11-19 10:58:20.747197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.174 [2024-11-19 10:58:20.747209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.174 [2024-11-19 10:58:20.747228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.174 qpair failed and we were unable to recover it. 
00:30:31.174 [2024-11-19 10:58:20.757146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.174 [2024-11-19 10:58:20.757200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.174 [2024-11-19 10:58:20.757218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.174 [2024-11-19 10:58:20.757224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.174 [2024-11-19 10:58:20.757231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.174 [2024-11-19 10:58:20.757246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.174 qpair failed and we were unable to recover it. 
00:30:31.174 [2024-11-19 10:58:20.767172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.175 [2024-11-19 10:58:20.767230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.175 [2024-11-19 10:58:20.767245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.175 [2024-11-19 10:58:20.767253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.175 [2024-11-19 10:58:20.767259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.175 [2024-11-19 10:58:20.767276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.175 qpair failed and we were unable to recover it. 
00:30:31.175 [2024-11-19 10:58:20.777215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.175 [2024-11-19 10:58:20.777284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.175 [2024-11-19 10:58:20.777298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.175 [2024-11-19 10:58:20.777306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.175 [2024-11-19 10:58:20.777312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.175 [2024-11-19 10:58:20.777328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.175 qpair failed and we were unable to recover it. 
00:30:31.175 [2024-11-19 10:58:20.787206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.175 [2024-11-19 10:58:20.787265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.175 [2024-11-19 10:58:20.787283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.175 [2024-11-19 10:58:20.787291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.175 [2024-11-19 10:58:20.787298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.175 [2024-11-19 10:58:20.787314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.175 qpair failed and we were unable to recover it. 
00:30:31.175 [2024-11-19 10:58:20.797309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.175 [2024-11-19 10:58:20.797368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.175 [2024-11-19 10:58:20.797382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.175 [2024-11-19 10:58:20.797389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.175 [2024-11-19 10:58:20.797396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.175 [2024-11-19 10:58:20.797412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.175 qpair failed and we were unable to recover it. 
00:30:31.175 [2024-11-19 10:58:20.807330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.175 [2024-11-19 10:58:20.807388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.175 [2024-11-19 10:58:20.807402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.175 [2024-11-19 10:58:20.807410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.175 [2024-11-19 10:58:20.807416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.175 [2024-11-19 10:58:20.807432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.175 qpair failed and we were unable to recover it. 
00:30:31.175 [2024-11-19 10:58:20.817317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.175 [2024-11-19 10:58:20.817381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.175 [2024-11-19 10:58:20.817395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.175 [2024-11-19 10:58:20.817403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.175 [2024-11-19 10:58:20.817409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.175 [2024-11-19 10:58:20.817424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.175 qpair failed and we were unable to recover it. 
00:30:31.175 [2024-11-19 10:58:20.827335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.175 [2024-11-19 10:58:20.827391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.175 [2024-11-19 10:58:20.827404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.175 [2024-11-19 10:58:20.827411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.175 [2024-11-19 10:58:20.827418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.175 [2024-11-19 10:58:20.827436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.175 qpair failed and we were unable to recover it. 
00:30:31.175 [2024-11-19 10:58:20.837375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.175 [2024-11-19 10:58:20.837459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.175 [2024-11-19 10:58:20.837475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.175 [2024-11-19 10:58:20.837482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.175 [2024-11-19 10:58:20.837488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.175 [2024-11-19 10:58:20.837503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.175 qpair failed and we were unable to recover it. 
00:30:31.175 [2024-11-19 10:58:20.847410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.175 [2024-11-19 10:58:20.847463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.175 [2024-11-19 10:58:20.847478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.175 [2024-11-19 10:58:20.847485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.175 [2024-11-19 10:58:20.847491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.175 [2024-11-19 10:58:20.847507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.175 qpair failed and we were unable to recover it. 
00:30:31.175 [2024-11-19 10:58:20.857436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.175 [2024-11-19 10:58:20.857492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.175 [2024-11-19 10:58:20.857507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.175 [2024-11-19 10:58:20.857514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.175 [2024-11-19 10:58:20.857521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.175 [2024-11-19 10:58:20.857536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.175 qpair failed and we were unable to recover it. 
00:30:31.175 [2024-11-19 10:58:20.867492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.175 [2024-11-19 10:58:20.867548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.175 [2024-11-19 10:58:20.867563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.175 [2024-11-19 10:58:20.867570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.175 [2024-11-19 10:58:20.867576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.175 [2024-11-19 10:58:20.867592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.175 qpair failed and we were unable to recover it. 
00:30:31.175 [2024-11-19 10:58:20.877418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.175 [2024-11-19 10:58:20.877480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.175 [2024-11-19 10:58:20.877494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.175 [2024-11-19 10:58:20.877502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.175 [2024-11-19 10:58:20.877508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.175 [2024-11-19 10:58:20.877523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.175 qpair failed and we were unable to recover it. 
00:30:31.175 [2024-11-19 10:58:20.887543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.175 [2024-11-19 10:58:20.887635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.175 [2024-11-19 10:58:20.887650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.175 [2024-11-19 10:58:20.887657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.175 [2024-11-19 10:58:20.887664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.175 [2024-11-19 10:58:20.887679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.175 qpair failed and we were unable to recover it. 
00:30:31.176 [2024-11-19 10:58:20.897500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.176 [2024-11-19 10:58:20.897579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.176 [2024-11-19 10:58:20.897594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.176 [2024-11-19 10:58:20.897602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.176 [2024-11-19 10:58:20.897608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.176 [2024-11-19 10:58:20.897623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.176 qpair failed and we were unable to recover it. 
00:30:31.176 [2024-11-19 10:58:20.907573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.176 [2024-11-19 10:58:20.907626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.176 [2024-11-19 10:58:20.907641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.176 [2024-11-19 10:58:20.907648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.176 [2024-11-19 10:58:20.907655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.176 [2024-11-19 10:58:20.907671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.176 qpair failed and we were unable to recover it. 
00:30:31.176 [2024-11-19 10:58:20.917593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.176 [2024-11-19 10:58:20.917651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.176 [2024-11-19 10:58:20.917670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.176 [2024-11-19 10:58:20.917678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.176 [2024-11-19 10:58:20.917684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.176 [2024-11-19 10:58:20.917700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.176 qpair failed and we were unable to recover it. 
00:30:31.176 [2024-11-19 10:58:20.927614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.176 [2024-11-19 10:58:20.927662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.176 [2024-11-19 10:58:20.927677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.176 [2024-11-19 10:58:20.927684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.176 [2024-11-19 10:58:20.927690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.176 [2024-11-19 10:58:20.927706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.176 qpair failed and we were unable to recover it. 
00:30:31.176 [2024-11-19 10:58:20.937703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.176 [2024-11-19 10:58:20.937756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.176 [2024-11-19 10:58:20.937771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.176 [2024-11-19 10:58:20.937778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.176 [2024-11-19 10:58:20.937785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.176 [2024-11-19 10:58:20.937801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.176 qpair failed and we were unable to recover it. 
00:30:31.176 [2024-11-19 10:58:20.947674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.176 [2024-11-19 10:58:20.947726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.176 [2024-11-19 10:58:20.947740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.176 [2024-11-19 10:58:20.947747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.176 [2024-11-19 10:58:20.947752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.176 [2024-11-19 10:58:20.947768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.176 qpair failed and we were unable to recover it. 
00:30:31.176 [2024-11-19 10:58:20.957727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.176 [2024-11-19 10:58:20.957781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.176 [2024-11-19 10:58:20.957795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.176 [2024-11-19 10:58:20.957802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.176 [2024-11-19 10:58:20.957811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.176 [2024-11-19 10:58:20.957827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.176 qpair failed and we were unable to recover it. 
00:30:31.434 [2024-11-19 10:58:20.967744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.434 [2024-11-19 10:58:20.967828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.434 [2024-11-19 10:58:20.967843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.434 [2024-11-19 10:58:20.967850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.434 [2024-11-19 10:58:20.967856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.434 [2024-11-19 10:58:20.967871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.434 qpair failed and we were unable to recover it. 
00:30:31.434 [2024-11-19 10:58:20.977749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.434 [2024-11-19 10:58:20.977807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.434 [2024-11-19 10:58:20.977822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.434 [2024-11-19 10:58:20.977830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.434 [2024-11-19 10:58:20.977836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.434 [2024-11-19 10:58:20.977850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.434 qpair failed and we were unable to recover it. 
00:30:31.434 [2024-11-19 10:58:20.987807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.434 [2024-11-19 10:58:20.987864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.434 [2024-11-19 10:58:20.987878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.434 [2024-11-19 10:58:20.987885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.434 [2024-11-19 10:58:20.987891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.434 [2024-11-19 10:58:20.987907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.434 qpair failed and we were unable to recover it. 
00:30:31.434 [2024-11-19 10:58:20.997894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.434 [2024-11-19 10:58:20.998002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.434 [2024-11-19 10:58:20.998017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.434 [2024-11-19 10:58:20.998025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.435 [2024-11-19 10:58:20.998032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.435 [2024-11-19 10:58:20.998048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.435 qpair failed and we were unable to recover it. 
00:30:31.435 [2024-11-19 10:58:21.007859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.435 [2024-11-19 10:58:21.007911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.435 [2024-11-19 10:58:21.007926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.435 [2024-11-19 10:58:21.007934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.435 [2024-11-19 10:58:21.007941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.435 [2024-11-19 10:58:21.007956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.435 qpair failed and we were unable to recover it. 
00:30:31.435 [2024-11-19 10:58:21.017950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.435 [2024-11-19 10:58:21.018007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.435 [2024-11-19 10:58:21.018022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.435 [2024-11-19 10:58:21.018029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.435 [2024-11-19 10:58:21.018036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.435 [2024-11-19 10:58:21.018052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.435 qpair failed and we were unable to recover it. 
00:30:31.435 [2024-11-19 10:58:21.027905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.435 [2024-11-19 10:58:21.027960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.435 [2024-11-19 10:58:21.027975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.435 [2024-11-19 10:58:21.027983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.435 [2024-11-19 10:58:21.027989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.435 [2024-11-19 10:58:21.028005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.435 qpair failed and we were unable to recover it. 
00:30:31.435 [2024-11-19 10:58:21.037990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.435 [2024-11-19 10:58:21.038051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.435 [2024-11-19 10:58:21.038066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.435 [2024-11-19 10:58:21.038073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.435 [2024-11-19 10:58:21.038079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.435 [2024-11-19 10:58:21.038094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.435 qpair failed and we were unable to recover it. 
00:30:31.435 [2024-11-19 10:58:21.047966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.435 [2024-11-19 10:58:21.048021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.435 [2024-11-19 10:58:21.048039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.435 [2024-11-19 10:58:21.048046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.435 [2024-11-19 10:58:21.048052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.435 [2024-11-19 10:58:21.048067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.435 qpair failed and we were unable to recover it. 
00:30:31.435 [2024-11-19 10:58:21.058006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.435 [2024-11-19 10:58:21.058061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.435 [2024-11-19 10:58:21.058075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.435 [2024-11-19 10:58:21.058083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.435 [2024-11-19 10:58:21.058089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.435 [2024-11-19 10:58:21.058104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.435 qpair failed and we were unable to recover it. 
00:30:31.435 [2024-11-19 10:58:21.067961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.435 [2024-11-19 10:58:21.068020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.435 [2024-11-19 10:58:21.068034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.435 [2024-11-19 10:58:21.068042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.435 [2024-11-19 10:58:21.068048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.435 [2024-11-19 10:58:21.068063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.435 qpair failed and we were unable to recover it. 
00:30:31.435 [2024-11-19 10:58:21.078107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.435 [2024-11-19 10:58:21.078173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.435 [2024-11-19 10:58:21.078187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.435 [2024-11-19 10:58:21.078194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.435 [2024-11-19 10:58:21.078204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.435 [2024-11-19 10:58:21.078220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.435 qpair failed and we were unable to recover it. 
00:30:31.435 [2024-11-19 10:58:21.088093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.435 [2024-11-19 10:58:21.088150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.435 [2024-11-19 10:58:21.088164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.435 [2024-11-19 10:58:21.088174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.435 [2024-11-19 10:58:21.088180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.435 [2024-11-19 10:58:21.088196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.435 qpair failed and we were unable to recover it. 
00:30:31.435 [2024-11-19 10:58:21.098128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.435 [2024-11-19 10:58:21.098187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.435 [2024-11-19 10:58:21.098206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.435 [2024-11-19 10:58:21.098214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.435 [2024-11-19 10:58:21.098220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.435 [2024-11-19 10:58:21.098236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.435 qpair failed and we were unable to recover it. 
00:30:31.435 [2024-11-19 10:58:21.108147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.435 [2024-11-19 10:58:21.108206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.435 [2024-11-19 10:58:21.108221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.435 [2024-11-19 10:58:21.108229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.435 [2024-11-19 10:58:21.108235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.435 [2024-11-19 10:58:21.108250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.435 qpair failed and we were unable to recover it. 
00:30:31.435 [2024-11-19 10:58:21.118182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.435 [2024-11-19 10:58:21.118242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.435 [2024-11-19 10:58:21.118257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.435 [2024-11-19 10:58:21.118264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.435 [2024-11-19 10:58:21.118270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.435 [2024-11-19 10:58:21.118286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.435 qpair failed and we were unable to recover it. 
00:30:31.435 [2024-11-19 10:58:21.128227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.435 [2024-11-19 10:58:21.128283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.435 [2024-11-19 10:58:21.128297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.436 [2024-11-19 10:58:21.128304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.436 [2024-11-19 10:58:21.128311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.436 [2024-11-19 10:58:21.128326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.436 qpair failed and we were unable to recover it. 
00:30:31.436 [2024-11-19 10:58:21.138164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.436 [2024-11-19 10:58:21.138223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.436 [2024-11-19 10:58:21.138237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.436 [2024-11-19 10:58:21.138245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.436 [2024-11-19 10:58:21.138251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.436 [2024-11-19 10:58:21.138266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.436 qpair failed and we were unable to recover it. 
00:30:31.436 [2024-11-19 10:58:21.148265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.436 [2024-11-19 10:58:21.148324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.436 [2024-11-19 10:58:21.148338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.436 [2024-11-19 10:58:21.148347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.436 [2024-11-19 10:58:21.148353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.436 [2024-11-19 10:58:21.148368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.436 qpair failed and we were unable to recover it. 
00:30:31.436 [2024-11-19 10:58:21.158297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.436 [2024-11-19 10:58:21.158361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.436 [2024-11-19 10:58:21.158374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.436 [2024-11-19 10:58:21.158381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.436 [2024-11-19 10:58:21.158388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.436 [2024-11-19 10:58:21.158403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.436 qpair failed and we were unable to recover it. 
00:30:31.436 [2024-11-19 10:58:21.168325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.436 [2024-11-19 10:58:21.168382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.436 [2024-11-19 10:58:21.168396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.436 [2024-11-19 10:58:21.168403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.436 [2024-11-19 10:58:21.168410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.436 [2024-11-19 10:58:21.168425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.436 qpair failed and we were unable to recover it. 
00:30:31.436 [2024-11-19 10:58:21.178375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.436 [2024-11-19 10:58:21.178485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.436 [2024-11-19 10:58:21.178500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.436 [2024-11-19 10:58:21.178507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.436 [2024-11-19 10:58:21.178513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.436 [2024-11-19 10:58:21.178528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.436 qpair failed and we were unable to recover it. 
00:30:31.436 [2024-11-19 10:58:21.188424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.436 [2024-11-19 10:58:21.188484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.436 [2024-11-19 10:58:21.188498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.436 [2024-11-19 10:58:21.188507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.436 [2024-11-19 10:58:21.188514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.436 [2024-11-19 10:58:21.188528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.436 qpair failed and we were unable to recover it. 
00:30:31.436 [2024-11-19 10:58:21.198401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.436 [2024-11-19 10:58:21.198483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.436 [2024-11-19 10:58:21.198498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.436 [2024-11-19 10:58:21.198505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.436 [2024-11-19 10:58:21.198512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.436 [2024-11-19 10:58:21.198527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.436 qpair failed and we were unable to recover it. 
00:30:31.436 [2024-11-19 10:58:21.208434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.436 [2024-11-19 10:58:21.208488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.436 [2024-11-19 10:58:21.208502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.436 [2024-11-19 10:58:21.208509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.436 [2024-11-19 10:58:21.208515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.436 [2024-11-19 10:58:21.208530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.436 qpair failed and we were unable to recover it. 
00:30:31.436 [2024-11-19 10:58:21.218465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.436 [2024-11-19 10:58:21.218523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.436 [2024-11-19 10:58:21.218537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.436 [2024-11-19 10:58:21.218548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.436 [2024-11-19 10:58:21.218555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.436 [2024-11-19 10:58:21.218569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.436 qpair failed and we were unable to recover it. 
00:30:31.696 [2024-11-19 10:58:21.228524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.696 [2024-11-19 10:58:21.228581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.696 [2024-11-19 10:58:21.228596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.697 [2024-11-19 10:58:21.228604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.697 [2024-11-19 10:58:21.228610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.697 [2024-11-19 10:58:21.228626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.697 qpair failed and we were unable to recover it. 
00:30:31.697 [2024-11-19 10:58:21.238566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.697 [2024-11-19 10:58:21.238619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.697 [2024-11-19 10:58:21.238633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.697 [2024-11-19 10:58:21.238640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.697 [2024-11-19 10:58:21.238647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.697 [2024-11-19 10:58:21.238662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.697 qpair failed and we were unable to recover it. 
00:30:31.697 [2024-11-19 10:58:21.248567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.697 [2024-11-19 10:58:21.248624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.697 [2024-11-19 10:58:21.248638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.697 [2024-11-19 10:58:21.248645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.697 [2024-11-19 10:58:21.248651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.697 [2024-11-19 10:58:21.248666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.697 qpair failed and we were unable to recover it. 
00:30:31.697 [2024-11-19 10:58:21.258635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.697 [2024-11-19 10:58:21.258736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.697 [2024-11-19 10:58:21.258758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.697 [2024-11-19 10:58:21.258766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.697 [2024-11-19 10:58:21.258773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.697 [2024-11-19 10:58:21.258796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.697 qpair failed and we were unable to recover it. 
00:30:31.697 [2024-11-19 10:58:21.268617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.697 [2024-11-19 10:58:21.268674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.697 [2024-11-19 10:58:21.268689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.697 [2024-11-19 10:58:21.268696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.697 [2024-11-19 10:58:21.268703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.697 [2024-11-19 10:58:21.268718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.697 qpair failed and we were unable to recover it. 
00:30:31.697 [2024-11-19 10:58:21.278648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.697 [2024-11-19 10:58:21.278720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.697 [2024-11-19 10:58:21.278735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.697 [2024-11-19 10:58:21.278741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.697 [2024-11-19 10:58:21.278747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.697 [2024-11-19 10:58:21.278763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.697 qpair failed and we were unable to recover it. 
00:30:31.697 [2024-11-19 10:58:21.288603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.697 [2024-11-19 10:58:21.288654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.697 [2024-11-19 10:58:21.288669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.697 [2024-11-19 10:58:21.288676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.697 [2024-11-19 10:58:21.288684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.697 [2024-11-19 10:58:21.288699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.697 qpair failed and we were unable to recover it. 
00:30:31.697 [2024-11-19 10:58:21.298711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.697 [2024-11-19 10:58:21.298776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.697 [2024-11-19 10:58:21.298790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.697 [2024-11-19 10:58:21.298797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.697 [2024-11-19 10:58:21.298803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.697 [2024-11-19 10:58:21.298819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.697 qpair failed and we were unable to recover it. 
00:30:31.697 [2024-11-19 10:58:21.308727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.697 [2024-11-19 10:58:21.308794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.697 [2024-11-19 10:58:21.308808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.697 [2024-11-19 10:58:21.308816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.697 [2024-11-19 10:58:21.308822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.697 [2024-11-19 10:58:21.308836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.697 qpair failed and we were unable to recover it. 
00:30:31.697 [2024-11-19 10:58:21.318768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.697 [2024-11-19 10:58:21.318841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.697 [2024-11-19 10:58:21.318855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.697 [2024-11-19 10:58:21.318862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.697 [2024-11-19 10:58:21.318868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.697 [2024-11-19 10:58:21.318883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.697 qpair failed and we were unable to recover it. 
00:30:31.697 [2024-11-19 10:58:21.328781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.697 [2024-11-19 10:58:21.328873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.697 [2024-11-19 10:58:21.328887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.697 [2024-11-19 10:58:21.328894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.697 [2024-11-19 10:58:21.328900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.697 [2024-11-19 10:58:21.328917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.697 qpair failed and we were unable to recover it. 
00:30:31.697 [2024-11-19 10:58:21.338840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.697 [2024-11-19 10:58:21.338896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.697 [2024-11-19 10:58:21.338910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.697 [2024-11-19 10:58:21.338918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.697 [2024-11-19 10:58:21.338924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.697 [2024-11-19 10:58:21.338940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.697 qpair failed and we were unable to recover it. 
00:30:31.697 [2024-11-19 10:58:21.348842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.698 [2024-11-19 10:58:21.348895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.698 [2024-11-19 10:58:21.348912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.698 [2024-11-19 10:58:21.348919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.698 [2024-11-19 10:58:21.348925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.698 [2024-11-19 10:58:21.348940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.698 qpair failed and we were unable to recover it. 
00:30:31.698 [2024-11-19 10:58:21.358867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.698 [2024-11-19 10:58:21.358933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.698 [2024-11-19 10:58:21.358948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.698 [2024-11-19 10:58:21.358955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.698 [2024-11-19 10:58:21.358962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.698 [2024-11-19 10:58:21.358978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.698 qpair failed and we were unable to recover it. 
00:30:31.698 [2024-11-19 10:58:21.368888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.698 [2024-11-19 10:58:21.368953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.698 [2024-11-19 10:58:21.368967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.698 [2024-11-19 10:58:21.368974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.698 [2024-11-19 10:58:21.368980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.698 [2024-11-19 10:58:21.368995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.698 qpair failed and we were unable to recover it. 
00:30:31.698 [2024-11-19 10:58:21.378927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.698 [2024-11-19 10:58:21.378993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.698 [2024-11-19 10:58:21.379007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.698 [2024-11-19 10:58:21.379014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.698 [2024-11-19 10:58:21.379021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.698 [2024-11-19 10:58:21.379036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.698 qpair failed and we were unable to recover it. 
00:30:31.698 [2024-11-19 10:58:21.388989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.698 [2024-11-19 10:58:21.389046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.698 [2024-11-19 10:58:21.389061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.698 [2024-11-19 10:58:21.389068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.698 [2024-11-19 10:58:21.389074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.698 [2024-11-19 10:58:21.389093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.698 qpair failed and we were unable to recover it. 
00:30:31.698 [2024-11-19 10:58:21.398986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.698 [2024-11-19 10:58:21.399050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.698 [2024-11-19 10:58:21.399064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.698 [2024-11-19 10:58:21.399072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.698 [2024-11-19 10:58:21.399078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.698 [2024-11-19 10:58:21.399093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.698 qpair failed and we were unable to recover it. 
00:30:31.698 [2024-11-19 10:58:21.409007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.698 [2024-11-19 10:58:21.409094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.698 [2024-11-19 10:58:21.409111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.698 [2024-11-19 10:58:21.409118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.698 [2024-11-19 10:58:21.409125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.698 [2024-11-19 10:58:21.409140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.698 qpair failed and we were unable to recover it. 
00:30:31.698 [2024-11-19 10:58:21.419033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.698 [2024-11-19 10:58:21.419096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.698 [2024-11-19 10:58:21.419111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.698 [2024-11-19 10:58:21.419118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.698 [2024-11-19 10:58:21.419124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.698 [2024-11-19 10:58:21.419139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.698 qpair failed and we were unable to recover it. 
00:30:31.698 [2024-11-19 10:58:21.429095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.698 [2024-11-19 10:58:21.429194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.698 [2024-11-19 10:58:21.429211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.698 [2024-11-19 10:58:21.429219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.698 [2024-11-19 10:58:21.429225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.698 [2024-11-19 10:58:21.429240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.698 qpair failed and we were unable to recover it. 
00:30:31.698 [2024-11-19 10:58:21.439123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.698 [2024-11-19 10:58:21.439178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.698 [2024-11-19 10:58:21.439192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.698 [2024-11-19 10:58:21.439199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.698 [2024-11-19 10:58:21.439210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.698 [2024-11-19 10:58:21.439226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.698 qpair failed and we were unable to recover it. 
00:30:31.698 [2024-11-19 10:58:21.449119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.699 [2024-11-19 10:58:21.449172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.699 [2024-11-19 10:58:21.449187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.699 [2024-11-19 10:58:21.449194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.699 [2024-11-19 10:58:21.449200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.699 [2024-11-19 10:58:21.449220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.699 qpair failed and we were unable to recover it. 
00:30:31.699 [2024-11-19 10:58:21.459152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.699 [2024-11-19 10:58:21.459222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.699 [2024-11-19 10:58:21.459236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.699 [2024-11-19 10:58:21.459244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.699 [2024-11-19 10:58:21.459250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.699 [2024-11-19 10:58:21.459265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.699 qpair failed and we were unable to recover it. 
00:30:31.699 [2024-11-19 10:58:21.469208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.699 [2024-11-19 10:58:21.469314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.699 [2024-11-19 10:58:21.469328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.699 [2024-11-19 10:58:21.469335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.699 [2024-11-19 10:58:21.469342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.699 [2024-11-19 10:58:21.469357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.699 qpair failed and we were unable to recover it. 
00:30:31.699 [2024-11-19 10:58:21.479225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.699 [2024-11-19 10:58:21.479293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.699 [2024-11-19 10:58:21.479311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.699 [2024-11-19 10:58:21.479318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.699 [2024-11-19 10:58:21.479324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.699 [2024-11-19 10:58:21.479340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.699 qpair failed and we were unable to recover it. 
00:30:31.960 [2024-11-19 10:58:21.489166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.960 [2024-11-19 10:58:21.489222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.960 [2024-11-19 10:58:21.489237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.960 [2024-11-19 10:58:21.489244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.960 [2024-11-19 10:58:21.489251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.960 [2024-11-19 10:58:21.489266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.960 qpair failed and we were unable to recover it. 
00:30:31.960 [2024-11-19 10:58:21.499277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.960 [2024-11-19 10:58:21.499350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.960 [2024-11-19 10:58:21.499365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.960 [2024-11-19 10:58:21.499372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.960 [2024-11-19 10:58:21.499378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.960 [2024-11-19 10:58:21.499394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.960 qpair failed and we were unable to recover it. 
00:30:31.960 [2024-11-19 10:58:21.509298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.960 [2024-11-19 10:58:21.509363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.960 [2024-11-19 10:58:21.509377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.960 [2024-11-19 10:58:21.509385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.960 [2024-11-19 10:58:21.509391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.960 [2024-11-19 10:58:21.509407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.960 qpair failed and we were unable to recover it. 
00:30:31.960 [2024-11-19 10:58:21.519331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.960 [2024-11-19 10:58:21.519396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.960 [2024-11-19 10:58:21.519410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.960 [2024-11-19 10:58:21.519417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.960 [2024-11-19 10:58:21.519427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.960 [2024-11-19 10:58:21.519442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.960 qpair failed and we were unable to recover it. 
00:30:31.960 [2024-11-19 10:58:21.529353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.960 [2024-11-19 10:58:21.529410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.961 [2024-11-19 10:58:21.529424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.961 [2024-11-19 10:58:21.529431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.961 [2024-11-19 10:58:21.529438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.961 [2024-11-19 10:58:21.529453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.961 qpair failed and we were unable to recover it. 
00:30:31.961 [2024-11-19 10:58:21.539389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.961 [2024-11-19 10:58:21.539441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.961 [2024-11-19 10:58:21.539458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.961 [2024-11-19 10:58:21.539465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.961 [2024-11-19 10:58:21.539472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.961 [2024-11-19 10:58:21.539488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.961 qpair failed and we were unable to recover it. 
00:30:31.961 [2024-11-19 10:58:21.549424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.961 [2024-11-19 10:58:21.549485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.961 [2024-11-19 10:58:21.549499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.961 [2024-11-19 10:58:21.549506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.961 [2024-11-19 10:58:21.549513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.961 [2024-11-19 10:58:21.549528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.961 qpair failed and we were unable to recover it. 
00:30:31.961 [2024-11-19 10:58:21.559478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.961 [2024-11-19 10:58:21.559530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.961 [2024-11-19 10:58:21.559544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.961 [2024-11-19 10:58:21.559551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.961 [2024-11-19 10:58:21.559557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.961 [2024-11-19 10:58:21.559573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.961 qpair failed and we were unable to recover it. 
00:30:31.961 [2024-11-19 10:58:21.569491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.961 [2024-11-19 10:58:21.569554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.961 [2024-11-19 10:58:21.569568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.961 [2024-11-19 10:58:21.569576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.961 [2024-11-19 10:58:21.569582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.961 [2024-11-19 10:58:21.569597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.961 qpair failed and we were unable to recover it. 
00:30:31.961 [2024-11-19 10:58:21.579508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.961 [2024-11-19 10:58:21.579560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.961 [2024-11-19 10:58:21.579574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.961 [2024-11-19 10:58:21.579581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.961 [2024-11-19 10:58:21.579587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.961 [2024-11-19 10:58:21.579603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.961 qpair failed and we were unable to recover it. 
00:30:31.961 [2024-11-19 10:58:21.589540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.961 [2024-11-19 10:58:21.589600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.961 [2024-11-19 10:58:21.589614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.961 [2024-11-19 10:58:21.589622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.961 [2024-11-19 10:58:21.589628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.961 [2024-11-19 10:58:21.589643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.961 qpair failed and we were unable to recover it. 
00:30:31.961 [2024-11-19 10:58:21.599590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.961 [2024-11-19 10:58:21.599650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.961 [2024-11-19 10:58:21.599664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.961 [2024-11-19 10:58:21.599671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.961 [2024-11-19 10:58:21.599677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.961 [2024-11-19 10:58:21.599692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.961 qpair failed and we were unable to recover it. 
00:30:31.961 [2024-11-19 10:58:21.609600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.961 [2024-11-19 10:58:21.609658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.961 [2024-11-19 10:58:21.609676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.961 [2024-11-19 10:58:21.609684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.961 [2024-11-19 10:58:21.609691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.961 [2024-11-19 10:58:21.609706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.961 qpair failed and we were unable to recover it. 
00:30:31.961 [2024-11-19 10:58:21.619634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.961 [2024-11-19 10:58:21.619706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.961 [2024-11-19 10:58:21.619720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.961 [2024-11-19 10:58:21.619727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.961 [2024-11-19 10:58:21.619734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.961 [2024-11-19 10:58:21.619749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.961 qpair failed and we were unable to recover it. 
00:30:31.961 [2024-11-19 10:58:21.629683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.961 [2024-11-19 10:58:21.629753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.961 [2024-11-19 10:58:21.629767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.961 [2024-11-19 10:58:21.629774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.961 [2024-11-19 10:58:21.629780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.961 [2024-11-19 10:58:21.629796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.961 qpair failed and we were unable to recover it. 
00:30:31.961 [2024-11-19 10:58:21.639682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.962 [2024-11-19 10:58:21.639735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.962 [2024-11-19 10:58:21.639749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.962 [2024-11-19 10:58:21.639756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.962 [2024-11-19 10:58:21.639763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.962 [2024-11-19 10:58:21.639778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.962 qpair failed and we were unable to recover it. 
00:30:31.962 [2024-11-19 10:58:21.649628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.962 [2024-11-19 10:58:21.649678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.962 [2024-11-19 10:58:21.649693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.962 [2024-11-19 10:58:21.649702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.962 [2024-11-19 10:58:21.649709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.962 [2024-11-19 10:58:21.649724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.962 qpair failed and we were unable to recover it. 
00:30:31.962 [2024-11-19 10:58:21.659745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.962 [2024-11-19 10:58:21.659829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.962 [2024-11-19 10:58:21.659845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.962 [2024-11-19 10:58:21.659852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.962 [2024-11-19 10:58:21.659858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.962 [2024-11-19 10:58:21.659872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.962 qpair failed and we were unable to recover it. 
00:30:31.962 [2024-11-19 10:58:21.669759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.962 [2024-11-19 10:58:21.669816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.962 [2024-11-19 10:58:21.669831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.962 [2024-11-19 10:58:21.669837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.962 [2024-11-19 10:58:21.669845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.962 [2024-11-19 10:58:21.669861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.962 qpair failed and we were unable to recover it. 
00:30:31.962 [2024-11-19 10:58:21.679807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.962 [2024-11-19 10:58:21.679888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.962 [2024-11-19 10:58:21.679903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.962 [2024-11-19 10:58:21.679910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.962 [2024-11-19 10:58:21.679917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.962 [2024-11-19 10:58:21.679932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.962 qpair failed and we were unable to recover it. 
00:30:31.962 [2024-11-19 10:58:21.689821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.962 [2024-11-19 10:58:21.689876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.962 [2024-11-19 10:58:21.689891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.962 [2024-11-19 10:58:21.689899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.962 [2024-11-19 10:58:21.689906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.962 [2024-11-19 10:58:21.689921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.962 qpair failed and we were unable to recover it. 
00:30:31.962 [2024-11-19 10:58:21.699846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.962 [2024-11-19 10:58:21.699911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.962 [2024-11-19 10:58:21.699928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.962 [2024-11-19 10:58:21.699936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.962 [2024-11-19 10:58:21.699945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.962 [2024-11-19 10:58:21.699961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.962 qpair failed and we were unable to recover it. 
00:30:31.962 [2024-11-19 10:58:21.709807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.962 [2024-11-19 10:58:21.709860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.962 [2024-11-19 10:58:21.709877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.962 [2024-11-19 10:58:21.709884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.962 [2024-11-19 10:58:21.709891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.962 [2024-11-19 10:58:21.709907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.962 qpair failed and we were unable to recover it. 
00:30:31.962 [2024-11-19 10:58:21.719902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.962 [2024-11-19 10:58:21.719993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.962 [2024-11-19 10:58:21.720010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.962 [2024-11-19 10:58:21.720017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.962 [2024-11-19 10:58:21.720024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.962 [2024-11-19 10:58:21.720041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.962 qpair failed and we were unable to recover it. 
00:30:31.962 [2024-11-19 10:58:21.729928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.962 [2024-11-19 10:58:21.730021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.962 [2024-11-19 10:58:21.730037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.962 [2024-11-19 10:58:21.730045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.962 [2024-11-19 10:58:21.730053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.962 [2024-11-19 10:58:21.730069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.962 qpair failed and we were unable to recover it. 
00:30:31.962 [2024-11-19 10:58:21.739991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.962 [2024-11-19 10:58:21.740088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.962 [2024-11-19 10:58:21.740105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.962 [2024-11-19 10:58:21.740114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.962 [2024-11-19 10:58:21.740121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:31.962 [2024-11-19 10:58:21.740137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.962 qpair failed and we were unable to recover it. 
00:30:32.223 [2024-11-19 10:58:21.750025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.223 [2024-11-19 10:58:21.750096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.223 [2024-11-19 10:58:21.750111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.223 [2024-11-19 10:58:21.750119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.223 [2024-11-19 10:58:21.750125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.223 [2024-11-19 10:58:21.750140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.223 qpair failed and we were unable to recover it. 
00:30:32.223 [2024-11-19 10:58:21.760057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.223 [2024-11-19 10:58:21.760123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.223 [2024-11-19 10:58:21.760138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.223 [2024-11-19 10:58:21.760146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.223 [2024-11-19 10:58:21.760152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.223 [2024-11-19 10:58:21.760167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.223 qpair failed and we were unable to recover it. 
00:30:32.223 [2024-11-19 10:58:21.769976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.223 [2024-11-19 10:58:21.770025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.223 [2024-11-19 10:58:21.770040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.223 [2024-11-19 10:58:21.770047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.223 [2024-11-19 10:58:21.770053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.223 [2024-11-19 10:58:21.770068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.223 qpair failed and we were unable to recover it. 
00:30:32.223 [2024-11-19 10:58:21.779991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.223 [2024-11-19 10:58:21.780052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.223 [2024-11-19 10:58:21.780066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.224 [2024-11-19 10:58:21.780076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.224 [2024-11-19 10:58:21.780083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.224 [2024-11-19 10:58:21.780098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.224 qpair failed and we were unable to recover it. 
00:30:32.224 [2024-11-19 10:58:21.790126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.224 [2024-11-19 10:58:21.790213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.224 [2024-11-19 10:58:21.790228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.224 [2024-11-19 10:58:21.790236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.224 [2024-11-19 10:58:21.790242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.224 [2024-11-19 10:58:21.790257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.224 qpair failed and we were unable to recover it. 
00:30:32.224 [2024-11-19 10:58:21.800054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.224 [2024-11-19 10:58:21.800112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.224 [2024-11-19 10:58:21.800126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.224 [2024-11-19 10:58:21.800134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.224 [2024-11-19 10:58:21.800141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.224 [2024-11-19 10:58:21.800157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.224 qpair failed and we were unable to recover it. 
00:30:32.224 [2024-11-19 10:58:21.810079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.224 [2024-11-19 10:58:21.810132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.224 [2024-11-19 10:58:21.810146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.224 [2024-11-19 10:58:21.810153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.224 [2024-11-19 10:58:21.810160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.224 [2024-11-19 10:58:21.810176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.224 qpair failed and we were unable to recover it. 
00:30:32.224 [2024-11-19 10:58:21.820106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.224 [2024-11-19 10:58:21.820162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.224 [2024-11-19 10:58:21.820176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.224 [2024-11-19 10:58:21.820183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.224 [2024-11-19 10:58:21.820189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.224 [2024-11-19 10:58:21.820211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.224 qpair failed and we were unable to recover it. 
00:30:32.224 [2024-11-19 10:58:21.830200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.224 [2024-11-19 10:58:21.830262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.224 [2024-11-19 10:58:21.830276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.224 [2024-11-19 10:58:21.830284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.224 [2024-11-19 10:58:21.830290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.224 [2024-11-19 10:58:21.830305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.224 qpair failed and we were unable to recover it. 
00:30:32.224 [2024-11-19 10:58:21.840174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.224 [2024-11-19 10:58:21.840274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.224 [2024-11-19 10:58:21.840289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.224 [2024-11-19 10:58:21.840296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.224 [2024-11-19 10:58:21.840302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.224 [2024-11-19 10:58:21.840317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.224 qpair failed and we were unable to recover it. 
00:30:32.224 [2024-11-19 10:58:21.850252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.224 [2024-11-19 10:58:21.850309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.224 [2024-11-19 10:58:21.850325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.224 [2024-11-19 10:58:21.850333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.224 [2024-11-19 10:58:21.850339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.224 [2024-11-19 10:58:21.850355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.224 qpair failed and we were unable to recover it. 
00:30:32.224 [2024-11-19 10:58:21.860241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.224 [2024-11-19 10:58:21.860296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.224 [2024-11-19 10:58:21.860313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.224 [2024-11-19 10:58:21.860322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.224 [2024-11-19 10:58:21.860329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.224 [2024-11-19 10:58:21.860345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.224 qpair failed and we were unable to recover it. 
00:30:32.224 [2024-11-19 10:58:21.870273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.224 [2024-11-19 10:58:21.870356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.224 [2024-11-19 10:58:21.870371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.224 [2024-11-19 10:58:21.870378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.224 [2024-11-19 10:58:21.870384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.224 [2024-11-19 10:58:21.870399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.224 qpair failed and we were unable to recover it. 
00:30:32.224 [2024-11-19 10:58:21.880392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.224 [2024-11-19 10:58:21.880450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.224 [2024-11-19 10:58:21.880464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.224 [2024-11-19 10:58:21.880472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.224 [2024-11-19 10:58:21.880478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.224 [2024-11-19 10:58:21.880493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.224 qpair failed and we were unable to recover it. 
00:30:32.224 [2024-11-19 10:58:21.890315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.224 [2024-11-19 10:58:21.890373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.224 [2024-11-19 10:58:21.890387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.224 [2024-11-19 10:58:21.890395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.224 [2024-11-19 10:58:21.890401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.224 [2024-11-19 10:58:21.890417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.225 qpair failed and we were unable to recover it. 
00:30:32.225 [2024-11-19 10:58:21.900384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.225 [2024-11-19 10:58:21.900442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.225 [2024-11-19 10:58:21.900458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.225 [2024-11-19 10:58:21.900466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.225 [2024-11-19 10:58:21.900473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.225 [2024-11-19 10:58:21.900489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.225 qpair failed and we were unable to recover it. 
00:30:32.225 [2024-11-19 10:58:21.910491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.225 [2024-11-19 10:58:21.910551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.225 [2024-11-19 10:58:21.910568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.225 [2024-11-19 10:58:21.910576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.225 [2024-11-19 10:58:21.910582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.225 [2024-11-19 10:58:21.910597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.225 qpair failed and we were unable to recover it. 
00:30:32.225 [2024-11-19 10:58:21.920391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.225 [2024-11-19 10:58:21.920455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.225 [2024-11-19 10:58:21.920470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.225 [2024-11-19 10:58:21.920477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.225 [2024-11-19 10:58:21.920484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.225 [2024-11-19 10:58:21.920500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.225 qpair failed and we were unable to recover it. 
00:30:32.225 [2024-11-19 10:58:21.930443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.225 [2024-11-19 10:58:21.930517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.225 [2024-11-19 10:58:21.930533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.225 [2024-11-19 10:58:21.930541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.225 [2024-11-19 10:58:21.930547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.225 [2024-11-19 10:58:21.930563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.225 qpair failed and we were unable to recover it. 
00:30:32.225 [2024-11-19 10:58:21.940472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.225 [2024-11-19 10:58:21.940529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.225 [2024-11-19 10:58:21.940543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.225 [2024-11-19 10:58:21.940550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.225 [2024-11-19 10:58:21.940557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.225 [2024-11-19 10:58:21.940573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.225 qpair failed and we were unable to recover it. 
00:30:32.225 [2024-11-19 10:58:21.950546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.225 [2024-11-19 10:58:21.950598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.225 [2024-11-19 10:58:21.950612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.225 [2024-11-19 10:58:21.950620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.225 [2024-11-19 10:58:21.950630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.225 [2024-11-19 10:58:21.950646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.225 qpair failed and we were unable to recover it. 
00:30:32.225 [2024-11-19 10:58:21.960512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.225 [2024-11-19 10:58:21.960567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.225 [2024-11-19 10:58:21.960582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.225 [2024-11-19 10:58:21.960589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.225 [2024-11-19 10:58:21.960595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.225 [2024-11-19 10:58:21.960610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.225 qpair failed and we were unable to recover it. 
00:30:32.225 [2024-11-19 10:58:21.970609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.225 [2024-11-19 10:58:21.970691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.225 [2024-11-19 10:58:21.970705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.225 [2024-11-19 10:58:21.970714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.225 [2024-11-19 10:58:21.970720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.225 [2024-11-19 10:58:21.970737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.225 qpair failed and we were unable to recover it. 
00:30:32.225 [2024-11-19 10:58:21.980585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.225 [2024-11-19 10:58:21.980639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.225 [2024-11-19 10:58:21.980654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.225 [2024-11-19 10:58:21.980661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.225 [2024-11-19 10:58:21.980667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.225 [2024-11-19 10:58:21.980683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.225 qpair failed and we were unable to recover it. 
00:30:32.225 [2024-11-19 10:58:21.990602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.225 [2024-11-19 10:58:21.990659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.225 [2024-11-19 10:58:21.990673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.225 [2024-11-19 10:58:21.990680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.225 [2024-11-19 10:58:21.990687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.225 [2024-11-19 10:58:21.990702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.225 qpair failed and we were unable to recover it. 
00:30:32.226 [2024-11-19 10:58:22.000699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.226 [2024-11-19 10:58:22.000752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.226 [2024-11-19 10:58:22.000767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.226 [2024-11-19 10:58:22.000774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.226 [2024-11-19 10:58:22.000780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.226 [2024-11-19 10:58:22.000796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.226 qpair failed and we were unable to recover it. 
00:30:32.226 [2024-11-19 10:58:22.010733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.226 [2024-11-19 10:58:22.010789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.226 [2024-11-19 10:58:22.010804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.226 [2024-11-19 10:58:22.010813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.226 [2024-11-19 10:58:22.010819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.226 [2024-11-19 10:58:22.010834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.226 qpair failed and we were unable to recover it. 
00:30:32.486 [2024-11-19 10:58:22.020685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.486 [2024-11-19 10:58:22.020739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.486 [2024-11-19 10:58:22.020753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.486 [2024-11-19 10:58:22.020760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.486 [2024-11-19 10:58:22.020767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.486 [2024-11-19 10:58:22.020783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.486 qpair failed and we were unable to recover it. 
00:30:32.486 [2024-11-19 10:58:22.030799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.486 [2024-11-19 10:58:22.030853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.486 [2024-11-19 10:58:22.030868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.486 [2024-11-19 10:58:22.030876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.486 [2024-11-19 10:58:22.030882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.486 [2024-11-19 10:58:22.030897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.486 qpair failed and we were unable to recover it. 
00:30:32.486 [2024-11-19 10:58:22.040808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.486 [2024-11-19 10:58:22.040860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.486 [2024-11-19 10:58:22.040877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.486 [2024-11-19 10:58:22.040885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.486 [2024-11-19 10:58:22.040891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.486 [2024-11-19 10:58:22.040907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.486 qpair failed and we were unable to recover it. 
00:30:32.486 [2024-11-19 10:58:22.050775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.486 [2024-11-19 10:58:22.050829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.486 [2024-11-19 10:58:22.050843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.486 [2024-11-19 10:58:22.050850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.486 [2024-11-19 10:58:22.050856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.486 [2024-11-19 10:58:22.050872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.486 qpair failed and we were unable to recover it. 
00:30:32.486 [2024-11-19 10:58:22.060815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.486 [2024-11-19 10:58:22.060917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.486 [2024-11-19 10:58:22.060932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.486 [2024-11-19 10:58:22.060939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.486 [2024-11-19 10:58:22.060945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.486 [2024-11-19 10:58:22.060959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.486 qpair failed and we were unable to recover it. 
00:30:32.486 [2024-11-19 10:58:22.070901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.486 [2024-11-19 10:58:22.070957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.486 [2024-11-19 10:58:22.070971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.486 [2024-11-19 10:58:22.070979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.487 [2024-11-19 10:58:22.070985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.487 [2024-11-19 10:58:22.071001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.487 qpair failed and we were unable to recover it. 
00:30:32.487 [2024-11-19 10:58:22.080965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.487 [2024-11-19 10:58:22.081022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.487 [2024-11-19 10:58:22.081036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.487 [2024-11-19 10:58:22.081044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.487 [2024-11-19 10:58:22.081054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.487 [2024-11-19 10:58:22.081070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.487 qpair failed and we were unable to recover it. 
00:30:32.487 [2024-11-19 10:58:22.090907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.487 [2024-11-19 10:58:22.090957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.487 [2024-11-19 10:58:22.090973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.487 [2024-11-19 10:58:22.090980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.487 [2024-11-19 10:58:22.090987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.487 [2024-11-19 10:58:22.091002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.487 qpair failed and we were unable to recover it. 
00:30:32.487 [2024-11-19 10:58:22.101002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.487 [2024-11-19 10:58:22.101065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.487 [2024-11-19 10:58:22.101081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.487 [2024-11-19 10:58:22.101089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.487 [2024-11-19 10:58:22.101095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.487 [2024-11-19 10:58:22.101111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.487 qpair failed and we were unable to recover it. 
00:30:32.487 [2024-11-19 10:58:22.111030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.487 [2024-11-19 10:58:22.111087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.487 [2024-11-19 10:58:22.111102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.487 [2024-11-19 10:58:22.111110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.487 [2024-11-19 10:58:22.111117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.487 [2024-11-19 10:58:22.111133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.487 qpair failed and we were unable to recover it. 
00:30:32.487 [2024-11-19 10:58:22.120978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.487 [2024-11-19 10:58:22.121048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.487 [2024-11-19 10:58:22.121066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.487 [2024-11-19 10:58:22.121074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.487 [2024-11-19 10:58:22.121080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.487 [2024-11-19 10:58:22.121096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.487 qpair failed and we were unable to recover it. 
00:30:32.487 [2024-11-19 10:58:22.131067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.487 [2024-11-19 10:58:22.131123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.487 [2024-11-19 10:58:22.131138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.487 [2024-11-19 10:58:22.131146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.487 [2024-11-19 10:58:22.131152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.487 [2024-11-19 10:58:22.131169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.487 qpair failed and we were unable to recover it. 
00:30:32.487 [2024-11-19 10:58:22.141126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.487 [2024-11-19 10:58:22.141179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.487 [2024-11-19 10:58:22.141194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.487 [2024-11-19 10:58:22.141204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.487 [2024-11-19 10:58:22.141211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.487 [2024-11-19 10:58:22.141227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.487 qpair failed and we were unable to recover it. 
00:30:32.487 [2024-11-19 10:58:22.151135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.487 [2024-11-19 10:58:22.151195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.487 [2024-11-19 10:58:22.151214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.487 [2024-11-19 10:58:22.151222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.487 [2024-11-19 10:58:22.151228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.487 [2024-11-19 10:58:22.151245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.487 qpair failed and we were unable to recover it. 
00:30:32.487 [2024-11-19 10:58:22.161155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.487 [2024-11-19 10:58:22.161219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.487 [2024-11-19 10:58:22.161236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.487 [2024-11-19 10:58:22.161244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.487 [2024-11-19 10:58:22.161250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.487 [2024-11-19 10:58:22.161266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.487 qpair failed and we were unable to recover it. 
00:30:32.487 [2024-11-19 10:58:22.171182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.487 [2024-11-19 10:58:22.171238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.487 [2024-11-19 10:58:22.171257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.487 [2024-11-19 10:58:22.171264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.487 [2024-11-19 10:58:22.171271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.487 [2024-11-19 10:58:22.171287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.487 qpair failed and we were unable to recover it. 
00:30:32.487 [2024-11-19 10:58:22.181247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.487 [2024-11-19 10:58:22.181337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.487 [2024-11-19 10:58:22.181352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.487 [2024-11-19 10:58:22.181360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.487 [2024-11-19 10:58:22.181366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.487 [2024-11-19 10:58:22.181381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.487 qpair failed and we were unable to recover it. 
00:30:32.487 [2024-11-19 10:58:22.191295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.487 [2024-11-19 10:58:22.191402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.487 [2024-11-19 10:58:22.191417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.487 [2024-11-19 10:58:22.191425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.487 [2024-11-19 10:58:22.191431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.487 [2024-11-19 10:58:22.191448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.487 qpair failed and we were unable to recover it. 
00:30:32.487 [2024-11-19 10:58:22.201267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.487 [2024-11-19 10:58:22.201324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.487 [2024-11-19 10:58:22.201340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.488 [2024-11-19 10:58:22.201348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.488 [2024-11-19 10:58:22.201354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.488 [2024-11-19 10:58:22.201370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.488 qpair failed and we were unable to recover it. 
00:30:32.488 [2024-11-19 10:58:22.211300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.488 [2024-11-19 10:58:22.211362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.488 [2024-11-19 10:58:22.211378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.488 [2024-11-19 10:58:22.211389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.488 [2024-11-19 10:58:22.211396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.488 [2024-11-19 10:58:22.211411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.488 qpair failed and we were unable to recover it. 
00:30:32.488 [2024-11-19 10:58:22.221367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.488 [2024-11-19 10:58:22.221437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.488 [2024-11-19 10:58:22.221452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.488 [2024-11-19 10:58:22.221460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.488 [2024-11-19 10:58:22.221466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.488 [2024-11-19 10:58:22.221482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.488 qpair failed and we were unable to recover it. 
00:30:32.488 [2024-11-19 10:58:22.231393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.488 [2024-11-19 10:58:22.231500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.488 [2024-11-19 10:58:22.231514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.488 [2024-11-19 10:58:22.231522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.488 [2024-11-19 10:58:22.231529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.488 [2024-11-19 10:58:22.231545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.488 qpair failed and we were unable to recover it. 
00:30:32.488 [2024-11-19 10:58:22.241413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.488 [2024-11-19 10:58:22.241476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.488 [2024-11-19 10:58:22.241490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.488 [2024-11-19 10:58:22.241498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.488 [2024-11-19 10:58:22.241504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.488 [2024-11-19 10:58:22.241519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.488 qpair failed and we were unable to recover it. 
00:30:32.488 [2024-11-19 10:58:22.251485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.488 [2024-11-19 10:58:22.251563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.488 [2024-11-19 10:58:22.251578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.488 [2024-11-19 10:58:22.251586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.488 [2024-11-19 10:58:22.251592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.488 [2024-11-19 10:58:22.251606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.488 qpair failed and we were unable to recover it. 
00:30:32.488 [2024-11-19 10:58:22.261380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.488 [2024-11-19 10:58:22.261435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.488 [2024-11-19 10:58:22.261449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.488 [2024-11-19 10:58:22.261456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.488 [2024-11-19 10:58:22.261463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.488 [2024-11-19 10:58:22.261478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.488 qpair failed and we were unable to recover it. 
00:30:32.488 [2024-11-19 10:58:22.271398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.488 [2024-11-19 10:58:22.271449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.488 [2024-11-19 10:58:22.271463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.488 [2024-11-19 10:58:22.271470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.488 [2024-11-19 10:58:22.271478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.488 [2024-11-19 10:58:22.271493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.488 qpair failed and we were unable to recover it. 
00:30:32.749 [2024-11-19 10:58:22.281528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.749 [2024-11-19 10:58:22.281591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.749 [2024-11-19 10:58:22.281606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.749 [2024-11-19 10:58:22.281613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.749 [2024-11-19 10:58:22.281619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.749 [2024-11-19 10:58:22.281635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.749 qpair failed and we were unable to recover it. 
00:30:32.749 [2024-11-19 10:58:22.291520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.749 [2024-11-19 10:58:22.291589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.749 [2024-11-19 10:58:22.291604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.749 [2024-11-19 10:58:22.291611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.749 [2024-11-19 10:58:22.291618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.749 [2024-11-19 10:58:22.291633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.749 qpair failed and we were unable to recover it. 
00:30:32.749 [2024-11-19 10:58:22.301561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.749 [2024-11-19 10:58:22.301623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.749 [2024-11-19 10:58:22.301637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.749 [2024-11-19 10:58:22.301645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.749 [2024-11-19 10:58:22.301652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.749 [2024-11-19 10:58:22.301667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.749 qpair failed and we were unable to recover it. 
00:30:32.749 [2024-11-19 10:58:22.311599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.750 [2024-11-19 10:58:22.311656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.750 [2024-11-19 10:58:22.311671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.750 [2024-11-19 10:58:22.311678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.750 [2024-11-19 10:58:22.311685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.750 [2024-11-19 10:58:22.311700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.750 qpair failed and we were unable to recover it. 
00:30:32.750 [2024-11-19 10:58:22.321623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.750 [2024-11-19 10:58:22.321678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.750 [2024-11-19 10:58:22.321692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.750 [2024-11-19 10:58:22.321699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.750 [2024-11-19 10:58:22.321706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.750 [2024-11-19 10:58:22.321722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.750 qpair failed and we were unable to recover it. 
00:30:32.750 [2024-11-19 10:58:22.331693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.750 [2024-11-19 10:58:22.331748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.750 [2024-11-19 10:58:22.331762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.750 [2024-11-19 10:58:22.331769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.750 [2024-11-19 10:58:22.331775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.750 [2024-11-19 10:58:22.331790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.750 qpair failed and we were unable to recover it. 
00:30:32.750 [2024-11-19 10:58:22.341713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.750 [2024-11-19 10:58:22.341814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.750 [2024-11-19 10:58:22.341828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.750 [2024-11-19 10:58:22.341838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.750 [2024-11-19 10:58:22.341844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.750 [2024-11-19 10:58:22.341860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.750 qpair failed and we were unable to recover it. 
00:30:32.750 [2024-11-19 10:58:22.351689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.750 [2024-11-19 10:58:22.351742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.750 [2024-11-19 10:58:22.351756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.750 [2024-11-19 10:58:22.351763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.750 [2024-11-19 10:58:22.351769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:32.750 [2024-11-19 10:58:22.351784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.750 qpair failed and we were unable to recover it. 
00:30:32.750 [2024-11-19 10:58:22.361719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.750 [2024-11-19 10:58:22.361773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.750 [2024-11-19 10:58:22.361787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.750 [2024-11-19 10:58:22.361794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.750 [2024-11-19 10:58:22.361800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.750 [2024-11-19 10:58:22.361815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.750 qpair failed and we were unable to recover it.
00:30:32.750 [2024-11-19 10:58:22.371749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.750 [2024-11-19 10:58:22.371802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.750 [2024-11-19 10:58:22.371816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.750 [2024-11-19 10:58:22.371823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.750 [2024-11-19 10:58:22.371830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.750 [2024-11-19 10:58:22.371845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.750 qpair failed and we were unable to recover it.
00:30:32.750 [2024-11-19 10:58:22.381804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.750 [2024-11-19 10:58:22.381896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.750 [2024-11-19 10:58:22.381911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.750 [2024-11-19 10:58:22.381918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.750 [2024-11-19 10:58:22.381924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.750 [2024-11-19 10:58:22.381942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.750 qpair failed and we were unable to recover it.
00:30:32.750 [2024-11-19 10:58:22.391826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.750 [2024-11-19 10:58:22.391888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.750 [2024-11-19 10:58:22.391902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.750 [2024-11-19 10:58:22.391909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.750 [2024-11-19 10:58:22.391916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.750 [2024-11-19 10:58:22.391931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.750 qpair failed and we were unable to recover it.
00:30:32.750 [2024-11-19 10:58:22.401857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.750 [2024-11-19 10:58:22.401928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.750 [2024-11-19 10:58:22.401943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.750 [2024-11-19 10:58:22.401951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.750 [2024-11-19 10:58:22.401957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.750 [2024-11-19 10:58:22.401972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.750 qpair failed and we were unable to recover it.
00:30:32.750 [2024-11-19 10:58:22.411899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.750 [2024-11-19 10:58:22.411959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.750 [2024-11-19 10:58:22.411973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.750 [2024-11-19 10:58:22.411981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.750 [2024-11-19 10:58:22.411987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.750 [2024-11-19 10:58:22.412002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.750 qpair failed and we were unable to recover it.
00:30:32.750 [2024-11-19 10:58:22.421953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.750 [2024-11-19 10:58:22.422058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.750 [2024-11-19 10:58:22.422072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.750 [2024-11-19 10:58:22.422079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.750 [2024-11-19 10:58:22.422086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.750 [2024-11-19 10:58:22.422101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.750 qpair failed and we were unable to recover it.
00:30:32.750 [2024-11-19 10:58:22.431923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.750 [2024-11-19 10:58:22.431979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.750 [2024-11-19 10:58:22.431993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.750 [2024-11-19 10:58:22.432000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.750 [2024-11-19 10:58:22.432007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.750 [2024-11-19 10:58:22.432021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.750 qpair failed and we were unable to recover it.
00:30:32.750 [2024-11-19 10:58:22.441959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.751 [2024-11-19 10:58:22.442014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.751 [2024-11-19 10:58:22.442029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.751 [2024-11-19 10:58:22.442036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.751 [2024-11-19 10:58:22.442042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.751 [2024-11-19 10:58:22.442058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.751 qpair failed and we were unable to recover it.
00:30:32.751 [2024-11-19 10:58:22.451976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.751 [2024-11-19 10:58:22.452029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.751 [2024-11-19 10:58:22.452044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.751 [2024-11-19 10:58:22.452051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.751 [2024-11-19 10:58:22.452057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.751 [2024-11-19 10:58:22.452073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.751 qpair failed and we were unable to recover it.
00:30:32.751 [2024-11-19 10:58:22.462066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.751 [2024-11-19 10:58:22.462127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.751 [2024-11-19 10:58:22.462141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.751 [2024-11-19 10:58:22.462148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.751 [2024-11-19 10:58:22.462155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.751 [2024-11-19 10:58:22.462170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.751 qpair failed and we were unable to recover it.
00:30:32.751 [2024-11-19 10:58:22.472074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.751 [2024-11-19 10:58:22.472128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.751 [2024-11-19 10:58:22.472146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.751 [2024-11-19 10:58:22.472154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.751 [2024-11-19 10:58:22.472160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.751 [2024-11-19 10:58:22.472175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.751 qpair failed and we were unable to recover it.
00:30:32.751 [2024-11-19 10:58:22.482070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.751 [2024-11-19 10:58:22.482124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.751 [2024-11-19 10:58:22.482137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.751 [2024-11-19 10:58:22.482145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.751 [2024-11-19 10:58:22.482151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.751 [2024-11-19 10:58:22.482166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.751 qpair failed and we were unable to recover it.
00:30:32.751 [2024-11-19 10:58:22.492160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.751 [2024-11-19 10:58:22.492220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.751 [2024-11-19 10:58:22.492237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.751 [2024-11-19 10:58:22.492246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.751 [2024-11-19 10:58:22.492253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.751 [2024-11-19 10:58:22.492269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.751 qpair failed and we were unable to recover it.
00:30:32.751 [2024-11-19 10:58:22.502145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.751 [2024-11-19 10:58:22.502209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.751 [2024-11-19 10:58:22.502224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.751 [2024-11-19 10:58:22.502231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.751 [2024-11-19 10:58:22.502238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.751 [2024-11-19 10:58:22.502253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.751 qpair failed and we were unable to recover it.
00:30:32.751 [2024-11-19 10:58:22.512175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.751 [2024-11-19 10:58:22.512235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.751 [2024-11-19 10:58:22.512250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.751 [2024-11-19 10:58:22.512257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.751 [2024-11-19 10:58:22.512267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.751 [2024-11-19 10:58:22.512283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.751 qpair failed and we were unable to recover it.
00:30:32.751 [2024-11-19 10:58:22.522196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.751 [2024-11-19 10:58:22.522284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.751 [2024-11-19 10:58:22.522298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.751 [2024-11-19 10:58:22.522306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.751 [2024-11-19 10:58:22.522312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.751 [2024-11-19 10:58:22.522328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.751 qpair failed and we were unable to recover it.
00:30:32.751 [2024-11-19 10:58:22.532205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.751 [2024-11-19 10:58:22.532259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.751 [2024-11-19 10:58:22.532273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.751 [2024-11-19 10:58:22.532281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.751 [2024-11-19 10:58:22.532287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:32.751 [2024-11-19 10:58:22.532303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.751 qpair failed and we were unable to recover it.
00:30:33.012 [2024-11-19 10:58:22.542262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.012 [2024-11-19 10:58:22.542319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.012 [2024-11-19 10:58:22.542332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.012 [2024-11-19 10:58:22.542339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.012 [2024-11-19 10:58:22.542346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:33.012 [2024-11-19 10:58:22.542361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:33.012 qpair failed and we were unable to recover it.
00:30:33.012 [2024-11-19 10:58:22.552293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.012 [2024-11-19 10:58:22.552350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.012 [2024-11-19 10:58:22.552364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.012 [2024-11-19 10:58:22.552372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.012 [2024-11-19 10:58:22.552378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:33.012 [2024-11-19 10:58:22.552394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:33.012 qpair failed and we were unable to recover it.
00:30:33.012 [2024-11-19 10:58:22.562314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.012 [2024-11-19 10:58:22.562371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.012 [2024-11-19 10:58:22.562385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.012 [2024-11-19 10:58:22.562392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.012 [2024-11-19 10:58:22.562399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:33.012 [2024-11-19 10:58:22.562415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:33.012 qpair failed and we were unable to recover it.
00:30:33.012 [2024-11-19 10:58:22.572392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.012 [2024-11-19 10:58:22.572446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.012 [2024-11-19 10:58:22.572460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.012 [2024-11-19 10:58:22.572467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.012 [2024-11-19 10:58:22.572474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:33.012 [2024-11-19 10:58:22.572489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:33.012 qpair failed and we were unable to recover it.
00:30:33.012 [2024-11-19 10:58:22.582389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.012 [2024-11-19 10:58:22.582451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.012 [2024-11-19 10:58:22.582464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.012 [2024-11-19 10:58:22.582472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.012 [2024-11-19 10:58:22.582478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:33.012 [2024-11-19 10:58:22.582493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:33.012 qpair failed and we were unable to recover it.
00:30:33.012 [2024-11-19 10:58:22.592421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.012 [2024-11-19 10:58:22.592478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.012 [2024-11-19 10:58:22.592494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.012 [2024-11-19 10:58:22.592502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.012 [2024-11-19 10:58:22.592509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:33.012 [2024-11-19 10:58:22.592524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:33.012 qpair failed and we were unable to recover it.
00:30:33.012 [2024-11-19 10:58:22.602446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.012 [2024-11-19 10:58:22.602528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.012 [2024-11-19 10:58:22.602546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.012 [2024-11-19 10:58:22.602553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.012 [2024-11-19 10:58:22.602559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:33.012 [2024-11-19 10:58:22.602574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:33.012 qpair failed and we were unable to recover it.
00:30:33.012 [2024-11-19 10:58:22.612467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.012 [2024-11-19 10:58:22.612527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.012 [2024-11-19 10:58:22.612541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.012 [2024-11-19 10:58:22.612549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.012 [2024-11-19 10:58:22.612555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:33.012 [2024-11-19 10:58:22.612570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:33.012 qpair failed and we were unable to recover it.
00:30:33.012 [2024-11-19 10:58:22.622547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.012 [2024-11-19 10:58:22.622652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.012 [2024-11-19 10:58:22.622667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.012 [2024-11-19 10:58:22.622674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.012 [2024-11-19 10:58:22.622681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:33.012 [2024-11-19 10:58:22.622696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:33.012 qpair failed and we were unable to recover it.
00:30:33.012 [2024-11-19 10:58:22.632526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.012 [2024-11-19 10:58:22.632592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.012 [2024-11-19 10:58:22.632606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.012 [2024-11-19 10:58:22.632615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.012 [2024-11-19 10:58:22.632621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:33.012 [2024-11-19 10:58:22.632636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:33.012 qpair failed and we were unable to recover it.
00:30:33.013 [2024-11-19 10:58:22.642567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.013 [2024-11-19 10:58:22.642619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.013 [2024-11-19 10:58:22.642633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.013 [2024-11-19 10:58:22.642641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.013 [2024-11-19 10:58:22.642651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:33.013 [2024-11-19 10:58:22.642665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:33.013 qpair failed and we were unable to recover it.
00:30:33.013 [2024-11-19 10:58:22.652557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.013 [2024-11-19 10:58:22.652617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.013 [2024-11-19 10:58:22.652631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.013 [2024-11-19 10:58:22.652639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.013 [2024-11-19 10:58:22.652645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:33.013 [2024-11-19 10:58:22.652660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:33.013 qpair failed and we were unable to recover it.
00:30:33.013 [2024-11-19 10:58:22.662615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.013 [2024-11-19 10:58:22.662690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.013 [2024-11-19 10:58:22.662705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.013 [2024-11-19 10:58:22.662713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.013 [2024-11-19 10:58:22.662719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:33.013 [2024-11-19 10:58:22.662735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:33.013 qpair failed and we were unable to recover it.
00:30:33.013 [2024-11-19 10:58:22.672678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.013 [2024-11-19 10:58:22.672742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.013 [2024-11-19 10:58:22.672757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.013 [2024-11-19 10:58:22.672764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.013 [2024-11-19 10:58:22.672771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:33.013 [2024-11-19 10:58:22.672786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:33.013 qpair failed and we were unable to recover it.
00:30:33.013 [2024-11-19 10:58:22.682662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.013 [2024-11-19 10:58:22.682716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.013 [2024-11-19 10:58:22.682731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.013 [2024-11-19 10:58:22.682739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.013 [2024-11-19 10:58:22.682745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:33.013 [2024-11-19 10:58:22.682761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:33.013 qpair failed and we were unable to recover it.
00:30:33.013 [2024-11-19 10:58:22.692687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.013 [2024-11-19 10:58:22.692742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.013 [2024-11-19 10:58:22.692756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.013 [2024-11-19 10:58:22.692765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.013 [2024-11-19 10:58:22.692771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:33.013 [2024-11-19 10:58:22.692786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:33.013 qpair failed and we were unable to recover it.
00:30:33.013 [2024-11-19 10:58:22.702729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.013 [2024-11-19 10:58:22.702789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.013 [2024-11-19 10:58:22.702804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.013 [2024-11-19 10:58:22.702811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.013 [2024-11-19 10:58:22.702818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90
00:30:33.013 [2024-11-19 10:58:22.702833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:33.013 qpair failed and we were unable to recover it.
00:30:33.013 [2024-11-19 10:58:22.712805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.013 [2024-11-19 10:58:22.712865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.013 [2024-11-19 10:58:22.712878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.013 [2024-11-19 10:58:22.712886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.013 [2024-11-19 10:58:22.712892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.013 [2024-11-19 10:58:22.712907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.013 qpair failed and we were unable to recover it. 
00:30:33.013 [2024-11-19 10:58:22.722791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.013 [2024-11-19 10:58:22.722841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.013 [2024-11-19 10:58:22.722855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.013 [2024-11-19 10:58:22.722862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.013 [2024-11-19 10:58:22.722869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.013 [2024-11-19 10:58:22.722884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.013 qpair failed and we were unable to recover it. 
00:30:33.013 [2024-11-19 10:58:22.732878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.013 [2024-11-19 10:58:22.732977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.013 [2024-11-19 10:58:22.732995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.013 [2024-11-19 10:58:22.733002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.013 [2024-11-19 10:58:22.733009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.013 [2024-11-19 10:58:22.733024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.013 qpair failed and we were unable to recover it. 
00:30:33.013 [2024-11-19 10:58:22.742841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.013 [2024-11-19 10:58:22.742896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.013 [2024-11-19 10:58:22.742910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.013 [2024-11-19 10:58:22.742917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.013 [2024-11-19 10:58:22.742924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.013 [2024-11-19 10:58:22.742940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.013 qpair failed and we were unable to recover it. 
00:30:33.013 [2024-11-19 10:58:22.752861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.013 [2024-11-19 10:58:22.752921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.013 [2024-11-19 10:58:22.752934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.013 [2024-11-19 10:58:22.752942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.013 [2024-11-19 10:58:22.752948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.013 [2024-11-19 10:58:22.752963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.013 qpair failed and we were unable to recover it. 
00:30:33.013 [2024-11-19 10:58:22.762917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.013 [2024-11-19 10:58:22.762987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.013 [2024-11-19 10:58:22.763002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.013 [2024-11-19 10:58:22.763009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.013 [2024-11-19 10:58:22.763015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.013 [2024-11-19 10:58:22.763031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.013 qpair failed and we were unable to recover it. 
00:30:33.014 [2024-11-19 10:58:22.772950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.014 [2024-11-19 10:58:22.773008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.014 [2024-11-19 10:58:22.773022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.014 [2024-11-19 10:58:22.773033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.014 [2024-11-19 10:58:22.773039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.014 [2024-11-19 10:58:22.773054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.014 qpair failed and we were unable to recover it. 
00:30:33.014 [2024-11-19 10:58:22.782958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.014 [2024-11-19 10:58:22.783013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.014 [2024-11-19 10:58:22.783029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.014 [2024-11-19 10:58:22.783036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.014 [2024-11-19 10:58:22.783043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.014 [2024-11-19 10:58:22.783059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.014 qpair failed and we were unable to recover it. 
00:30:33.014 [2024-11-19 10:58:22.792985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.014 [2024-11-19 10:58:22.793043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.014 [2024-11-19 10:58:22.793058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.014 [2024-11-19 10:58:22.793065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.014 [2024-11-19 10:58:22.793072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.014 [2024-11-19 10:58:22.793088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.014 qpair failed and we were unable to recover it. 
00:30:33.274 [2024-11-19 10:58:22.803040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.274 [2024-11-19 10:58:22.803089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.274 [2024-11-19 10:58:22.803103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.274 [2024-11-19 10:58:22.803110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.274 [2024-11-19 10:58:22.803117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.274 [2024-11-19 10:58:22.803133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.274 qpair failed and we were unable to recover it. 
00:30:33.274 [2024-11-19 10:58:22.813029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.274 [2024-11-19 10:58:22.813083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.274 [2024-11-19 10:58:22.813097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.274 [2024-11-19 10:58:22.813104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.274 [2024-11-19 10:58:22.813111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.274 [2024-11-19 10:58:22.813127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.274 qpair failed and we were unable to recover it. 
00:30:33.274 [2024-11-19 10:58:22.823060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.274 [2024-11-19 10:58:22.823165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.274 [2024-11-19 10:58:22.823180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.274 [2024-11-19 10:58:22.823187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.274 [2024-11-19 10:58:22.823193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.274 [2024-11-19 10:58:22.823215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.274 qpair failed and we were unable to recover it. 
00:30:33.274 [2024-11-19 10:58:22.833099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.274 [2024-11-19 10:58:22.833154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.274 [2024-11-19 10:58:22.833168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.274 [2024-11-19 10:58:22.833175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.274 [2024-11-19 10:58:22.833181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.274 [2024-11-19 10:58:22.833197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.274 qpair failed and we were unable to recover it. 
00:30:33.274 [2024-11-19 10:58:22.843120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.274 [2024-11-19 10:58:22.843174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.274 [2024-11-19 10:58:22.843189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.274 [2024-11-19 10:58:22.843196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.274 [2024-11-19 10:58:22.843206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.274 [2024-11-19 10:58:22.843222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.274 qpair failed and we were unable to recover it. 
00:30:33.274 [2024-11-19 10:58:22.853142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.274 [2024-11-19 10:58:22.853216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.274 [2024-11-19 10:58:22.853231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.274 [2024-11-19 10:58:22.853239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.274 [2024-11-19 10:58:22.853246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.274 [2024-11-19 10:58:22.853262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.274 qpair failed and we were unable to recover it. 
00:30:33.274 [2024-11-19 10:58:22.863184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.275 [2024-11-19 10:58:22.863251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.275 [2024-11-19 10:58:22.863266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.275 [2024-11-19 10:58:22.863273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.275 [2024-11-19 10:58:22.863279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.275 [2024-11-19 10:58:22.863295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.275 qpair failed and we were unable to recover it. 
00:30:33.275 [2024-11-19 10:58:22.873263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.275 [2024-11-19 10:58:22.873324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.275 [2024-11-19 10:58:22.873339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.275 [2024-11-19 10:58:22.873346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.275 [2024-11-19 10:58:22.873353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.275 [2024-11-19 10:58:22.873369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.275 qpair failed and we were unable to recover it. 
00:30:33.275 [2024-11-19 10:58:22.883238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.275 [2024-11-19 10:58:22.883289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.275 [2024-11-19 10:58:22.883303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.275 [2024-11-19 10:58:22.883310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.275 [2024-11-19 10:58:22.883317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.275 [2024-11-19 10:58:22.883332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.275 qpair failed and we were unable to recover it. 
00:30:33.275 [2024-11-19 10:58:22.893261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.275 [2024-11-19 10:58:22.893329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.275 [2024-11-19 10:58:22.893344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.275 [2024-11-19 10:58:22.893351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.275 [2024-11-19 10:58:22.893358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.275 [2024-11-19 10:58:22.893375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.275 qpair failed and we were unable to recover it. 
00:30:33.275 [2024-11-19 10:58:22.903313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.275 [2024-11-19 10:58:22.903387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.275 [2024-11-19 10:58:22.903404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.275 [2024-11-19 10:58:22.903415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.275 [2024-11-19 10:58:22.903422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.275 [2024-11-19 10:58:22.903437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.275 qpair failed and we were unable to recover it. 
00:30:33.275 [2024-11-19 10:58:22.913322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.275 [2024-11-19 10:58:22.913389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.275 [2024-11-19 10:58:22.913404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.275 [2024-11-19 10:58:22.913412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.275 [2024-11-19 10:58:22.913418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.275 [2024-11-19 10:58:22.913433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.275 qpair failed and we were unable to recover it. 
00:30:33.275 [2024-11-19 10:58:22.923354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.275 [2024-11-19 10:58:22.923409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.275 [2024-11-19 10:58:22.923423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.275 [2024-11-19 10:58:22.923430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.275 [2024-11-19 10:58:22.923437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.275 [2024-11-19 10:58:22.923452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.275 qpair failed and we were unable to recover it. 
00:30:33.275 [2024-11-19 10:58:22.933391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.275 [2024-11-19 10:58:22.933443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.275 [2024-11-19 10:58:22.933458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.275 [2024-11-19 10:58:22.933465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.275 [2024-11-19 10:58:22.933471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.275 [2024-11-19 10:58:22.933486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.275 qpair failed and we were unable to recover it. 
00:30:33.275 [2024-11-19 10:58:22.943378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.275 [2024-11-19 10:58:22.943443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.275 [2024-11-19 10:58:22.943458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.275 [2024-11-19 10:58:22.943466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.275 [2024-11-19 10:58:22.943472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.275 [2024-11-19 10:58:22.943491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.275 qpair failed and we were unable to recover it. 
00:30:33.275 [2024-11-19 10:58:22.953437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.275 [2024-11-19 10:58:22.953495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.275 [2024-11-19 10:58:22.953510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.275 [2024-11-19 10:58:22.953517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.275 [2024-11-19 10:58:22.953524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.275 [2024-11-19 10:58:22.953539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.275 qpair failed and we were unable to recover it. 
00:30:33.275 [2024-11-19 10:58:22.963483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.275 [2024-11-19 10:58:22.963534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.275 [2024-11-19 10:58:22.963548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.275 [2024-11-19 10:58:22.963555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.275 [2024-11-19 10:58:22.963561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.275 [2024-11-19 10:58:22.963577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.275 qpair failed and we were unable to recover it. 
00:30:33.275 [2024-11-19 10:58:22.973523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.275 [2024-11-19 10:58:22.973580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.275 [2024-11-19 10:58:22.973594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.275 [2024-11-19 10:58:22.973602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.275 [2024-11-19 10:58:22.973609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.275 [2024-11-19 10:58:22.973624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.275 qpair failed and we were unable to recover it. 
00:30:33.275 [2024-11-19 10:58:22.983579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.275 [2024-11-19 10:58:22.983684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.275 [2024-11-19 10:58:22.983699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.275 [2024-11-19 10:58:22.983706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.275 [2024-11-19 10:58:22.983713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.275 [2024-11-19 10:58:22.983728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.275 qpair failed and we were unable to recover it. 
00:30:33.275 [2024-11-19 10:58:22.993582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.276 [2024-11-19 10:58:22.993658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.276 [2024-11-19 10:58:22.993673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.276 [2024-11-19 10:58:22.993680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.276 [2024-11-19 10:58:22.993686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.276 [2024-11-19 10:58:22.993702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.276 qpair failed and we were unable to recover it. 
00:30:33.276 [2024-11-19 10:58:23.003590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.276 [2024-11-19 10:58:23.003646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.276 [2024-11-19 10:58:23.003661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.276 [2024-11-19 10:58:23.003670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.276 [2024-11-19 10:58:23.003678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.276 [2024-11-19 10:58:23.003693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.276 qpair failed and we were unable to recover it. 
00:30:33.276 [2024-11-19 10:58:23.013612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.276 [2024-11-19 10:58:23.013689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.276 [2024-11-19 10:58:23.013704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.276 [2024-11-19 10:58:23.013711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.276 [2024-11-19 10:58:23.013717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.276 [2024-11-19 10:58:23.013732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.276 qpair failed and we were unable to recover it. 
00:30:33.276 [2024-11-19 10:58:23.023643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.276 [2024-11-19 10:58:23.023699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.276 [2024-11-19 10:58:23.023712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.276 [2024-11-19 10:58:23.023720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.276 [2024-11-19 10:58:23.023726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.276 [2024-11-19 10:58:23.023742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.276 qpair failed and we were unable to recover it. 
00:30:33.276 [2024-11-19 10:58:23.033718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.276 [2024-11-19 10:58:23.033775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.276 [2024-11-19 10:58:23.033792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.276 [2024-11-19 10:58:23.033799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.276 [2024-11-19 10:58:23.033806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.276 [2024-11-19 10:58:23.033822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.276 qpair failed and we were unable to recover it. 
00:30:33.276 [2024-11-19 10:58:23.043727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.276 [2024-11-19 10:58:23.043827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.276 [2024-11-19 10:58:23.043841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.276 [2024-11-19 10:58:23.043848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.276 [2024-11-19 10:58:23.043855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.276 [2024-11-19 10:58:23.043870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.276 qpair failed and we were unable to recover it. 
00:30:33.276 [2024-11-19 10:58:23.053723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.276 [2024-11-19 10:58:23.053831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.276 [2024-11-19 10:58:23.053845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.276 [2024-11-19 10:58:23.053852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.276 [2024-11-19 10:58:23.053859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.276 [2024-11-19 10:58:23.053874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.276 qpair failed and we were unable to recover it. 
00:30:33.536 [2024-11-19 10:58:23.063768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.536 [2024-11-19 10:58:23.063856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.536 [2024-11-19 10:58:23.063870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.536 [2024-11-19 10:58:23.063877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.536 [2024-11-19 10:58:23.063884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.536 [2024-11-19 10:58:23.063898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.536 qpair failed and we were unable to recover it. 
00:30:33.536 [2024-11-19 10:58:23.073779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.536 [2024-11-19 10:58:23.073836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.536 [2024-11-19 10:58:23.073850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.536 [2024-11-19 10:58:23.073856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.536 [2024-11-19 10:58:23.073866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.536 [2024-11-19 10:58:23.073882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.536 qpair failed and we were unable to recover it. 
00:30:33.536 [2024-11-19 10:58:23.083853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.536 [2024-11-19 10:58:23.083918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.536 [2024-11-19 10:58:23.083932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.536 [2024-11-19 10:58:23.083940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.536 [2024-11-19 10:58:23.083946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.536 [2024-11-19 10:58:23.083961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.536 qpair failed and we were unable to recover it. 
00:30:33.536 [2024-11-19 10:58:23.093775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.536 [2024-11-19 10:58:23.093830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.536 [2024-11-19 10:58:23.093845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.536 [2024-11-19 10:58:23.093852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.536 [2024-11-19 10:58:23.093859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.536 [2024-11-19 10:58:23.093874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.536 qpair failed and we were unable to recover it. 
00:30:33.536 [2024-11-19 10:58:23.103913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.537 [2024-11-19 10:58:23.104010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.537 [2024-11-19 10:58:23.104026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.537 [2024-11-19 10:58:23.104033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.537 [2024-11-19 10:58:23.104040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.537 [2024-11-19 10:58:23.104055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.537 qpair failed and we were unable to recover it. 
00:30:33.537 [2024-11-19 10:58:23.113904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.537 [2024-11-19 10:58:23.113966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.537 [2024-11-19 10:58:23.113979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.537 [2024-11-19 10:58:23.113987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.537 [2024-11-19 10:58:23.113994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.537 [2024-11-19 10:58:23.114009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.537 qpair failed and we were unable to recover it. 
00:30:33.537 [2024-11-19 10:58:23.123836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.537 [2024-11-19 10:58:23.123900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.537 [2024-11-19 10:58:23.123917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.537 [2024-11-19 10:58:23.123924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.537 [2024-11-19 10:58:23.123931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.537 [2024-11-19 10:58:23.123947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.537 qpair failed and we were unable to recover it. 
00:30:33.537 [2024-11-19 10:58:23.133927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.537 [2024-11-19 10:58:23.133983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.537 [2024-11-19 10:58:23.133999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.537 [2024-11-19 10:58:23.134007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.537 [2024-11-19 10:58:23.134014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.537 [2024-11-19 10:58:23.134030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.537 qpair failed and we were unable to recover it. 
00:30:33.537 [2024-11-19 10:58:23.143970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.537 [2024-11-19 10:58:23.144046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.537 [2024-11-19 10:58:23.144063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.537 [2024-11-19 10:58:23.144071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.537 [2024-11-19 10:58:23.144078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.537 [2024-11-19 10:58:23.144094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.537 qpair failed and we were unable to recover it. 
00:30:33.537 [2024-11-19 10:58:23.153995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.537 [2024-11-19 10:58:23.154052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.537 [2024-11-19 10:58:23.154068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.537 [2024-11-19 10:58:23.154076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.537 [2024-11-19 10:58:23.154083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.537 [2024-11-19 10:58:23.154099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.537 qpair failed and we were unable to recover it. 
00:30:33.537 [2024-11-19 10:58:23.163948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.537 [2024-11-19 10:58:23.164001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.537 [2024-11-19 10:58:23.164023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.537 [2024-11-19 10:58:23.164034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.537 [2024-11-19 10:58:23.164043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.537 [2024-11-19 10:58:23.164062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.537 qpair failed and we were unable to recover it. 
00:30:33.537 [2024-11-19 10:58:23.173962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.537 [2024-11-19 10:58:23.174021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.537 [2024-11-19 10:58:23.174038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.537 [2024-11-19 10:58:23.174046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.537 [2024-11-19 10:58:23.174053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.537 [2024-11-19 10:58:23.174069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.537 qpair failed and we were unable to recover it. 
00:30:33.537 [2024-11-19 10:58:23.184045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.537 [2024-11-19 10:58:23.184133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.537 [2024-11-19 10:58:23.184148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.537 [2024-11-19 10:58:23.184155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.537 [2024-11-19 10:58:23.184162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.537 [2024-11-19 10:58:23.184177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.537 qpair failed and we were unable to recover it. 
00:30:33.537 [2024-11-19 10:58:23.194103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.537 [2024-11-19 10:58:23.194157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.537 [2024-11-19 10:58:23.194173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.537 [2024-11-19 10:58:23.194180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.537 [2024-11-19 10:58:23.194188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.537 [2024-11-19 10:58:23.194208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.537 qpair failed and we were unable to recover it. 
00:30:33.537 [2024-11-19 10:58:23.204162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.537 [2024-11-19 10:58:23.204224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.537 [2024-11-19 10:58:23.204240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.537 [2024-11-19 10:58:23.204249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.537 [2024-11-19 10:58:23.204258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.537 [2024-11-19 10:58:23.204273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.537 qpair failed and we were unable to recover it. 
00:30:33.537 [2024-11-19 10:58:23.214173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.537 [2024-11-19 10:58:23.214234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.537 [2024-11-19 10:58:23.214248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.537 [2024-11-19 10:58:23.214256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.537 [2024-11-19 10:58:23.214262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.537 [2024-11-19 10:58:23.214277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.537 qpair failed and we were unable to recover it. 
00:30:33.537 [2024-11-19 10:58:23.224153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.537 [2024-11-19 10:58:23.224225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.537 [2024-11-19 10:58:23.224240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.537 [2024-11-19 10:58:23.224248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.537 [2024-11-19 10:58:23.224255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.537 [2024-11-19 10:58:23.224271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.537 qpair failed and we were unable to recover it. 
00:30:33.537 [2024-11-19 10:58:23.234224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.537 [2024-11-19 10:58:23.234289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.538 [2024-11-19 10:58:23.234304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.538 [2024-11-19 10:58:23.234311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.538 [2024-11-19 10:58:23.234317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.538 [2024-11-19 10:58:23.234333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.538 qpair failed and we were unable to recover it. 
00:30:33.538 [2024-11-19 10:58:23.244198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.538 [2024-11-19 10:58:23.244259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.538 [2024-11-19 10:58:23.244273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.538 [2024-11-19 10:58:23.244281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.538 [2024-11-19 10:58:23.244287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.538 [2024-11-19 10:58:23.244302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.538 qpair failed and we were unable to recover it. 
00:30:33.538 [2024-11-19 10:58:23.254270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.538 [2024-11-19 10:58:23.254327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.538 [2024-11-19 10:58:23.254341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.538 [2024-11-19 10:58:23.254349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.538 [2024-11-19 10:58:23.254356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.538 [2024-11-19 10:58:23.254371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.538 qpair failed and we were unable to recover it. 
00:30:33.538 [2024-11-19 10:58:23.264266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.538 [2024-11-19 10:58:23.264324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.538 [2024-11-19 10:58:23.264339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.538 [2024-11-19 10:58:23.264346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.538 [2024-11-19 10:58:23.264352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.538 [2024-11-19 10:58:23.264369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.538 qpair failed and we were unable to recover it. 
00:30:33.538 [2024-11-19 10:58:23.274275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.538 [2024-11-19 10:58:23.274333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.538 [2024-11-19 10:58:23.274348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.538 [2024-11-19 10:58:23.274356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.538 [2024-11-19 10:58:23.274363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.538 [2024-11-19 10:58:23.274379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.538 qpair failed and we were unable to recover it. 
00:30:33.538 [2024-11-19 10:58:23.284361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.538 [2024-11-19 10:58:23.284412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.538 [2024-11-19 10:58:23.284427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.538 [2024-11-19 10:58:23.284434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.538 [2024-11-19 10:58:23.284441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.538 [2024-11-19 10:58:23.284455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.538 qpair failed and we were unable to recover it. 
00:30:33.538 [2024-11-19 10:58:23.294322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.538 [2024-11-19 10:58:23.294428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.538 [2024-11-19 10:58:23.294450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.538 [2024-11-19 10:58:23.294459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.538 [2024-11-19 10:58:23.294465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.538 [2024-11-19 10:58:23.294482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.538 qpair failed and we were unable to recover it. 
00:30:33.538 [2024-11-19 10:58:23.304364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.538 [2024-11-19 10:58:23.304434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.538 [2024-11-19 10:58:23.304451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.538 [2024-11-19 10:58:23.304458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.538 [2024-11-19 10:58:23.304464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.538 [2024-11-19 10:58:23.304480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.538 qpair failed and we were unable to recover it. 
00:30:33.538 [2024-11-19 10:58:23.314384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.538 [2024-11-19 10:58:23.314437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.538 [2024-11-19 10:58:23.314452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.538 [2024-11-19 10:58:23.314459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.538 [2024-11-19 10:58:23.314466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.538 [2024-11-19 10:58:23.314482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.538 qpair failed and we were unable to recover it. 
00:30:33.538 [2024-11-19 10:58:23.324466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.538 [2024-11-19 10:58:23.324520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.538 [2024-11-19 10:58:23.324534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.538 [2024-11-19 10:58:23.324542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.538 [2024-11-19 10:58:23.324548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.538 [2024-11-19 10:58:23.324563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.538 qpair failed and we were unable to recover it. 
00:30:33.798 [2024-11-19 10:58:23.334571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.798 [2024-11-19 10:58:23.334642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.798 [2024-11-19 10:58:23.334658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.798 [2024-11-19 10:58:23.334669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.798 [2024-11-19 10:58:23.334676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.798 [2024-11-19 10:58:23.334692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.798 qpair failed and we were unable to recover it. 
00:30:33.798 [2024-11-19 10:58:23.344600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.798 [2024-11-19 10:58:23.344658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.798 [2024-11-19 10:58:23.344674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.798 [2024-11-19 10:58:23.344682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.798 [2024-11-19 10:58:23.344689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.799 [2024-11-19 10:58:23.344704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.799 qpair failed and we were unable to recover it. 
00:30:33.799 [2024-11-19 10:58:23.354505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.799 [2024-11-19 10:58:23.354560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.799 [2024-11-19 10:58:23.354575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.799 [2024-11-19 10:58:23.354583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.799 [2024-11-19 10:58:23.354590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.799 [2024-11-19 10:58:23.354605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.799 qpair failed and we were unable to recover it. 
00:30:33.799 [2024-11-19 10:58:23.364558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.799 [2024-11-19 10:58:23.364651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.799 [2024-11-19 10:58:23.364667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.799 [2024-11-19 10:58:23.364674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.799 [2024-11-19 10:58:23.364680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.799 [2024-11-19 10:58:23.364696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.799 qpair failed and we were unable to recover it. 
00:30:33.799 [2024-11-19 10:58:23.374588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.799 [2024-11-19 10:58:23.374640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.799 [2024-11-19 10:58:23.374655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.799 [2024-11-19 10:58:23.374662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.799 [2024-11-19 10:58:23.374668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.799 [2024-11-19 10:58:23.374687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.799 qpair failed and we were unable to recover it. 
00:30:33.799 [2024-11-19 10:58:23.384577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.799 [2024-11-19 10:58:23.384634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.799 [2024-11-19 10:58:23.384651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.799 [2024-11-19 10:58:23.384658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.799 [2024-11-19 10:58:23.384665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.799 [2024-11-19 10:58:23.384681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.799 qpair failed and we were unable to recover it. 
00:30:33.799 [2024-11-19 10:58:23.394659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.799 [2024-11-19 10:58:23.394715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.799 [2024-11-19 10:58:23.394731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.799 [2024-11-19 10:58:23.394739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.799 [2024-11-19 10:58:23.394746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.799 [2024-11-19 10:58:23.394761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.799 qpair failed and we were unable to recover it. 
00:30:33.799 [2024-11-19 10:58:23.404710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.799 [2024-11-19 10:58:23.404764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.799 [2024-11-19 10:58:23.404779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.799 [2024-11-19 10:58:23.404788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.799 [2024-11-19 10:58:23.404795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.799 [2024-11-19 10:58:23.404811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.799 qpair failed and we were unable to recover it. 
00:30:33.799 [2024-11-19 10:58:23.414651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.799 [2024-11-19 10:58:23.414717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.799 [2024-11-19 10:58:23.414733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.799 [2024-11-19 10:58:23.414740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.799 [2024-11-19 10:58:23.414747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.799 [2024-11-19 10:58:23.414763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.799 qpair failed and we were unable to recover it. 
00:30:33.799 [2024-11-19 10:58:23.424760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.799 [2024-11-19 10:58:23.424820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.799 [2024-11-19 10:58:23.424836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.799 [2024-11-19 10:58:23.424845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.799 [2024-11-19 10:58:23.424852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.799 [2024-11-19 10:58:23.424867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.799 qpair failed and we were unable to recover it. 
00:30:33.799 [2024-11-19 10:58:23.434733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.799 [2024-11-19 10:58:23.434789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.799 [2024-11-19 10:58:23.434804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.799 [2024-11-19 10:58:23.434811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.799 [2024-11-19 10:58:23.434818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.799 [2024-11-19 10:58:23.434833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.799 qpair failed and we were unable to recover it. 
00:30:33.799 [2024-11-19 10:58:23.444749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.799 [2024-11-19 10:58:23.444837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.799 [2024-11-19 10:58:23.444854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.799 [2024-11-19 10:58:23.444861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.799 [2024-11-19 10:58:23.444867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.799 [2024-11-19 10:58:23.444883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.799 qpair failed and we were unable to recover it. 
00:30:33.799 [2024-11-19 10:58:23.454863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.799 [2024-11-19 10:58:23.454921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.799 [2024-11-19 10:58:23.454937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.799 [2024-11-19 10:58:23.454944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.799 [2024-11-19 10:58:23.454951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.799 [2024-11-19 10:58:23.454966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.799 qpair failed and we were unable to recover it. 
00:30:33.799 [2024-11-19 10:58:23.464821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.799 [2024-11-19 10:58:23.464895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.799 [2024-11-19 10:58:23.464911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.799 [2024-11-19 10:58:23.464921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.799 [2024-11-19 10:58:23.464928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.799 [2024-11-19 10:58:23.464943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.799 qpair failed and we were unable to recover it. 
00:30:33.799 [2024-11-19 10:58:23.474833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.799 [2024-11-19 10:58:23.474892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.799 [2024-11-19 10:58:23.474908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.799 [2024-11-19 10:58:23.474916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.800 [2024-11-19 10:58:23.474922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.800 [2024-11-19 10:58:23.474938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.800 qpair failed and we were unable to recover it. 
00:30:33.800 [2024-11-19 10:58:23.484850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.800 [2024-11-19 10:58:23.484906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.800 [2024-11-19 10:58:23.484923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.800 [2024-11-19 10:58:23.484929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.800 [2024-11-19 10:58:23.484936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.800 [2024-11-19 10:58:23.484952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.800 qpair failed and we were unable to recover it. 
00:30:33.800 [2024-11-19 10:58:23.494960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.800 [2024-11-19 10:58:23.495012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.800 [2024-11-19 10:58:23.495027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.800 [2024-11-19 10:58:23.495035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.800 [2024-11-19 10:58:23.495042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.800 [2024-11-19 10:58:23.495058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.800 qpair failed and we were unable to recover it. 
00:30:33.800 [2024-11-19 10:58:23.504982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.800 [2024-11-19 10:58:23.505038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.800 [2024-11-19 10:58:23.505055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.800 [2024-11-19 10:58:23.505062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.800 [2024-11-19 10:58:23.505070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.800 [2024-11-19 10:58:23.505090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.800 qpair failed and we were unable to recover it. 
00:30:33.800 [2024-11-19 10:58:23.514960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.800 [2024-11-19 10:58:23.515055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.800 [2024-11-19 10:58:23.515071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.800 [2024-11-19 10:58:23.515079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.800 [2024-11-19 10:58:23.515085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.800 [2024-11-19 10:58:23.515101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.800 qpair failed and we were unable to recover it. 
00:30:33.800 [2024-11-19 10:58:23.525006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.800 [2024-11-19 10:58:23.525103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.800 [2024-11-19 10:58:23.525119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.800 [2024-11-19 10:58:23.525128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.800 [2024-11-19 10:58:23.525134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.800 [2024-11-19 10:58:23.525150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.800 qpair failed and we were unable to recover it. 
00:30:33.800 [2024-11-19 10:58:23.535024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.800 [2024-11-19 10:58:23.535075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.800 [2024-11-19 10:58:23.535090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.800 [2024-11-19 10:58:23.535099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.800 [2024-11-19 10:58:23.535106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.800 [2024-11-19 10:58:23.535121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.800 qpair failed and we were unable to recover it. 
00:30:33.800 [2024-11-19 10:58:23.545121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.800 [2024-11-19 10:58:23.545177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.800 [2024-11-19 10:58:23.545193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.800 [2024-11-19 10:58:23.545205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.800 [2024-11-19 10:58:23.545212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.800 [2024-11-19 10:58:23.545228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.800 qpair failed and we were unable to recover it. 
00:30:33.800 [2024-11-19 10:58:23.555116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.800 [2024-11-19 10:58:23.555179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.800 [2024-11-19 10:58:23.555195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.800 [2024-11-19 10:58:23.555206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.800 [2024-11-19 10:58:23.555213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.800 [2024-11-19 10:58:23.555230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.800 qpair failed and we were unable to recover it. 
00:30:33.800 [2024-11-19 10:58:23.565176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.800 [2024-11-19 10:58:23.565233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.800 [2024-11-19 10:58:23.565250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.800 [2024-11-19 10:58:23.565257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.800 [2024-11-19 10:58:23.565264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.800 [2024-11-19 10:58:23.565279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.800 qpair failed and we were unable to recover it. 
00:30:33.800 [2024-11-19 10:58:23.575244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.800 [2024-11-19 10:58:23.575360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.800 [2024-11-19 10:58:23.575376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.800 [2024-11-19 10:58:23.575385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.800 [2024-11-19 10:58:23.575391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.800 [2024-11-19 10:58:23.575408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.800 qpair failed and we were unable to recover it. 
00:30:33.800 [2024-11-19 10:58:23.585244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.800 [2024-11-19 10:58:23.585300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.800 [2024-11-19 10:58:23.585316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.800 [2024-11-19 10:58:23.585325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.800 [2024-11-19 10:58:23.585332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:33.800 [2024-11-19 10:58:23.585348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:33.800 qpair failed and we were unable to recover it. 
00:30:34.061 [2024-11-19 10:58:23.595234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.061 [2024-11-19 10:58:23.595288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.061 [2024-11-19 10:58:23.595307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.061 [2024-11-19 10:58:23.595315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.061 [2024-11-19 10:58:23.595322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.061 [2024-11-19 10:58:23.595338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.061 qpair failed and we were unable to recover it. 
00:30:34.061 [2024-11-19 10:58:23.605208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.061 [2024-11-19 10:58:23.605296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.061 [2024-11-19 10:58:23.605312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.061 [2024-11-19 10:58:23.605320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.061 [2024-11-19 10:58:23.605328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.061 [2024-11-19 10:58:23.605344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.061 qpair failed and we were unable to recover it. 
00:30:34.061 [2024-11-19 10:58:23.615230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.061 [2024-11-19 10:58:23.615296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.061 [2024-11-19 10:58:23.615311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.061 [2024-11-19 10:58:23.615319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.061 [2024-11-19 10:58:23.615325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.061 [2024-11-19 10:58:23.615341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.061 qpair failed and we were unable to recover it. 
00:30:34.061 [2024-11-19 10:58:23.625331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.061 [2024-11-19 10:58:23.625390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.061 [2024-11-19 10:58:23.625404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.061 [2024-11-19 10:58:23.625411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.061 [2024-11-19 10:58:23.625417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.061 [2024-11-19 10:58:23.625432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.061 qpair failed and we were unable to recover it. 
00:30:34.061 [2024-11-19 10:58:23.635341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.061 [2024-11-19 10:58:23.635407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.061 [2024-11-19 10:58:23.635422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.061 [2024-11-19 10:58:23.635429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.061 [2024-11-19 10:58:23.635439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.061 [2024-11-19 10:58:23.635455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.061 qpair failed and we were unable to recover it. 
00:30:34.061 [2024-11-19 10:58:23.645412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.061 [2024-11-19 10:58:23.645470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.061 [2024-11-19 10:58:23.645486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.061 [2024-11-19 10:58:23.645493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.061 [2024-11-19 10:58:23.645499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.061 [2024-11-19 10:58:23.645514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.061 qpair failed and we were unable to recover it. 
00:30:34.061 [2024-11-19 10:58:23.655401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.061 [2024-11-19 10:58:23.655452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.061 [2024-11-19 10:58:23.655466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.061 [2024-11-19 10:58:23.655473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.061 [2024-11-19 10:58:23.655480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.061 [2024-11-19 10:58:23.655496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.061 qpair failed and we were unable to recover it. 
00:30:34.061 [2024-11-19 10:58:23.665517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.061 [2024-11-19 10:58:23.665614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.061 [2024-11-19 10:58:23.665629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.061 [2024-11-19 10:58:23.665636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.061 [2024-11-19 10:58:23.665642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.061 [2024-11-19 10:58:23.665656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.061 qpair failed and we were unable to recover it. 
00:30:34.061 [2024-11-19 10:58:23.675500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.061 [2024-11-19 10:58:23.675558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.061 [2024-11-19 10:58:23.675572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.061 [2024-11-19 10:58:23.675579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.061 [2024-11-19 10:58:23.675585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.061 [2024-11-19 10:58:23.675601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.061 qpair failed and we were unable to recover it. 
00:30:34.061 [2024-11-19 10:58:23.685534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.061 [2024-11-19 10:58:23.685640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.061 [2024-11-19 10:58:23.685654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.061 [2024-11-19 10:58:23.685661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.061 [2024-11-19 10:58:23.685668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.061 [2024-11-19 10:58:23.685683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.061 qpair failed and we were unable to recover it. 
00:30:34.061 [2024-11-19 10:58:23.695522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.061 [2024-11-19 10:58:23.695592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.061 [2024-11-19 10:58:23.695607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.061 [2024-11-19 10:58:23.695614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.061 [2024-11-19 10:58:23.695620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.061 [2024-11-19 10:58:23.695636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.061 qpair failed and we were unable to recover it. 
00:30:34.061 [2024-11-19 10:58:23.705490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.062 [2024-11-19 10:58:23.705557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.062 [2024-11-19 10:58:23.705571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.062 [2024-11-19 10:58:23.705579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.062 [2024-11-19 10:58:23.705585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.062 [2024-11-19 10:58:23.705600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.062 qpair failed and we were unable to recover it. 
00:30:34.062 [2024-11-19 10:58:23.715574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.062 [2024-11-19 10:58:23.715629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.062 [2024-11-19 10:58:23.715643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.062 [2024-11-19 10:58:23.715650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.062 [2024-11-19 10:58:23.715657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.062 [2024-11-19 10:58:23.715672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.062 qpair failed and we were unable to recover it. 
00:30:34.062 [2024-11-19 10:58:23.725605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.062 [2024-11-19 10:58:23.725660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.062 [2024-11-19 10:58:23.725677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.062 [2024-11-19 10:58:23.725684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.062 [2024-11-19 10:58:23.725691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.062 [2024-11-19 10:58:23.725706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.062 qpair failed and we were unable to recover it. 
00:30:34.062 [2024-11-19 10:58:23.735667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.062 [2024-11-19 10:58:23.735724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.062 [2024-11-19 10:58:23.735738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.062 [2024-11-19 10:58:23.735745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.062 [2024-11-19 10:58:23.735751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.062 [2024-11-19 10:58:23.735766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.062 qpair failed and we were unable to recover it. 
00:30:34.062 [2024-11-19 10:58:23.745669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.062 [2024-11-19 10:58:23.745741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.062 [2024-11-19 10:58:23.745756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.062 [2024-11-19 10:58:23.745764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.062 [2024-11-19 10:58:23.745770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.062 [2024-11-19 10:58:23.745784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.062 qpair failed and we were unable to recover it. 
00:30:34.062 [2024-11-19 10:58:23.755690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.062 [2024-11-19 10:58:23.755747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.062 [2024-11-19 10:58:23.755763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.062 [2024-11-19 10:58:23.755770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.062 [2024-11-19 10:58:23.755777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.062 [2024-11-19 10:58:23.755792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.062 qpair failed and we were unable to recover it. 
00:30:34.062 [2024-11-19 10:58:23.765738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.062 [2024-11-19 10:58:23.765805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.062 [2024-11-19 10:58:23.765820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.062 [2024-11-19 10:58:23.765828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.062 [2024-11-19 10:58:23.765837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.062 [2024-11-19 10:58:23.765853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.062 qpair failed and we were unable to recover it. 
00:30:34.062 [2024-11-19 10:58:23.775749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.062 [2024-11-19 10:58:23.775803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.062 [2024-11-19 10:58:23.775817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.062 [2024-11-19 10:58:23.775824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.062 [2024-11-19 10:58:23.775830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.062 [2024-11-19 10:58:23.775845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.062 qpair failed and we were unable to recover it. 
00:30:34.062 [2024-11-19 10:58:23.785763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.062 [2024-11-19 10:58:23.785818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.062 [2024-11-19 10:58:23.785833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.062 [2024-11-19 10:58:23.785840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.062 [2024-11-19 10:58:23.785846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.062 [2024-11-19 10:58:23.785861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.062 qpair failed and we were unable to recover it. 
00:30:34.062 [2024-11-19 10:58:23.795809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.062 [2024-11-19 10:58:23.795863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.062 [2024-11-19 10:58:23.795878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.062 [2024-11-19 10:58:23.795885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.062 [2024-11-19 10:58:23.795891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.062 [2024-11-19 10:58:23.795906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.062 qpair failed and we were unable to recover it. 
00:30:34.062 [2024-11-19 10:58:23.805851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.062 [2024-11-19 10:58:23.805906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.062 [2024-11-19 10:58:23.805921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.062 [2024-11-19 10:58:23.805928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.062 [2024-11-19 10:58:23.805935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.062 [2024-11-19 10:58:23.805950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.062 qpair failed and we were unable to recover it. 
00:30:34.062 [2024-11-19 10:58:23.815849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.062 [2024-11-19 10:58:23.815905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.062 [2024-11-19 10:58:23.815919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.062 [2024-11-19 10:58:23.815927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.062 [2024-11-19 10:58:23.815933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.062 [2024-11-19 10:58:23.815949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.062 qpair failed and we were unable to recover it. 
00:30:34.062 [2024-11-19 10:58:23.825930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.062 [2024-11-19 10:58:23.826031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.062 [2024-11-19 10:58:23.826046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.062 [2024-11-19 10:58:23.826053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.062 [2024-11-19 10:58:23.826059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.062 [2024-11-19 10:58:23.826074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.062 qpair failed and we were unable to recover it. 
00:30:34.062 [2024-11-19 10:58:23.835919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.063 [2024-11-19 10:58:23.835979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.063 [2024-11-19 10:58:23.835993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.063 [2024-11-19 10:58:23.836001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.063 [2024-11-19 10:58:23.836007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.063 [2024-11-19 10:58:23.836023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.063 qpair failed and we were unable to recover it. 
00:30:34.063 [2024-11-19 10:58:23.845942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.063 [2024-11-19 10:58:23.846002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.063 [2024-11-19 10:58:23.846016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.063 [2024-11-19 10:58:23.846023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.063 [2024-11-19 10:58:23.846031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.063 [2024-11-19 10:58:23.846046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.063 qpair failed and we were unable to recover it. 
00:30:34.323 [2024-11-19 10:58:23.855978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.323 [2024-11-19 10:58:23.856036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.323 [2024-11-19 10:58:23.856056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.323 [2024-11-19 10:58:23.856064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.323 [2024-11-19 10:58:23.856070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.323 [2024-11-19 10:58:23.856086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.323 qpair failed and we were unable to recover it. 
00:30:34.323 [2024-11-19 10:58:23.866042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.323 [2024-11-19 10:58:23.866142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.323 [2024-11-19 10:58:23.866157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.323 [2024-11-19 10:58:23.866164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.323 [2024-11-19 10:58:23.866170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.323 [2024-11-19 10:58:23.866186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.323 qpair failed and we were unable to recover it. 
00:30:34.323 [2024-11-19 10:58:23.876026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.323 [2024-11-19 10:58:23.876082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.323 [2024-11-19 10:58:23.876096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.323 [2024-11-19 10:58:23.876103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.323 [2024-11-19 10:58:23.876110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.323 [2024-11-19 10:58:23.876126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.323 qpair failed and we were unable to recover it. 
00:30:34.323 [2024-11-19 10:58:23.886006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.323 [2024-11-19 10:58:23.886069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.323 [2024-11-19 10:58:23.886083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.323 [2024-11-19 10:58:23.886091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.323 [2024-11-19 10:58:23.886097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.323 [2024-11-19 10:58:23.886112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.323 qpair failed and we were unable to recover it. 
00:30:34.323 [2024-11-19 10:58:23.896030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.323 [2024-11-19 10:58:23.896082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.323 [2024-11-19 10:58:23.896097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.323 [2024-11-19 10:58:23.896107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.323 [2024-11-19 10:58:23.896114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.323 [2024-11-19 10:58:23.896129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.323 qpair failed and we were unable to recover it. 
00:30:34.323 [2024-11-19 10:58:23.906181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.323 [2024-11-19 10:58:23.906277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.323 [2024-11-19 10:58:23.906295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.323 [2024-11-19 10:58:23.906303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.323 [2024-11-19 10:58:23.906309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.323 [2024-11-19 10:58:23.906326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.323 qpair failed and we were unable to recover it. 
00:30:34.323 [2024-11-19 10:58:23.916147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.323 [2024-11-19 10:58:23.916206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.323 [2024-11-19 10:58:23.916220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.323 [2024-11-19 10:58:23.916228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.323 [2024-11-19 10:58:23.916235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.323 [2024-11-19 10:58:23.916250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.323 qpair failed and we were unable to recover it. 
00:30:34.324 [2024-11-19 10:58:23.926210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.324 [2024-11-19 10:58:23.926272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.324 [2024-11-19 10:58:23.926286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.324 [2024-11-19 10:58:23.926294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.324 [2024-11-19 10:58:23.926300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.324 [2024-11-19 10:58:23.926315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.324 qpair failed and we were unable to recover it. 
00:30:34.324 [2024-11-19 10:58:23.936191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.324 [2024-11-19 10:58:23.936251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.324 [2024-11-19 10:58:23.936266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.324 [2024-11-19 10:58:23.936273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.324 [2024-11-19 10:58:23.936279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.324 [2024-11-19 10:58:23.936298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.324 qpair failed and we were unable to recover it. 
00:30:34.324 [2024-11-19 10:58:23.946264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.324 [2024-11-19 10:58:23.946322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.324 [2024-11-19 10:58:23.946337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.324 [2024-11-19 10:58:23.946344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.324 [2024-11-19 10:58:23.946351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.324 [2024-11-19 10:58:23.946366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.324 qpair failed and we were unable to recover it. 
00:30:34.324 [2024-11-19 10:58:23.956185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.324 [2024-11-19 10:58:23.956248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.324 [2024-11-19 10:58:23.956263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.324 [2024-11-19 10:58:23.956272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.324 [2024-11-19 10:58:23.956281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.324 [2024-11-19 10:58:23.956298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.324 qpair failed and we were unable to recover it. 
00:30:34.324 [2024-11-19 10:58:23.966295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.324 [2024-11-19 10:58:23.966360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.324 [2024-11-19 10:58:23.966374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.324 [2024-11-19 10:58:23.966382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.324 [2024-11-19 10:58:23.966388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.324 [2024-11-19 10:58:23.966404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.324 qpair failed and we were unable to recover it. 
00:30:34.324 [2024-11-19 10:58:23.976244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.324 [2024-11-19 10:58:23.976318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.324 [2024-11-19 10:58:23.976333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.324 [2024-11-19 10:58:23.976340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.324 [2024-11-19 10:58:23.976346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.324 [2024-11-19 10:58:23.976362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.324 qpair failed and we were unable to recover it. 
00:30:34.324 [2024-11-19 10:58:23.986269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.324 [2024-11-19 10:58:23.986331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.324 [2024-11-19 10:58:23.986345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.324 [2024-11-19 10:58:23.986352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.324 [2024-11-19 10:58:23.986358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.324 [2024-11-19 10:58:23.986373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.324 qpair failed and we were unable to recover it. 
00:30:34.324 [2024-11-19 10:58:23.996360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.324 [2024-11-19 10:58:23.996446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.324 [2024-11-19 10:58:23.996460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.324 [2024-11-19 10:58:23.996467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.324 [2024-11-19 10:58:23.996473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.324 [2024-11-19 10:58:23.996487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.324 qpair failed and we were unable to recover it. 
00:30:34.324 [2024-11-19 10:58:24.006434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.324 [2024-11-19 10:58:24.006521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.324 [2024-11-19 10:58:24.006535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.324 [2024-11-19 10:58:24.006542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.324 [2024-11-19 10:58:24.006548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.324 [2024-11-19 10:58:24.006563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.324 qpair failed and we were unable to recover it. 
00:30:34.324 [2024-11-19 10:58:24.016347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.324 [2024-11-19 10:58:24.016410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.324 [2024-11-19 10:58:24.016424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.324 [2024-11-19 10:58:24.016432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.324 [2024-11-19 10:58:24.016438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.324 [2024-11-19 10:58:24.016453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.324 qpair failed and we were unable to recover it. 
00:30:34.324 [2024-11-19 10:58:24.026464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.324 [2024-11-19 10:58:24.026520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.324 [2024-11-19 10:58:24.026534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.324 [2024-11-19 10:58:24.026544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.324 [2024-11-19 10:58:24.026551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.324 [2024-11-19 10:58:24.026566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.324 qpair failed and we were unable to recover it. 
00:30:34.324 [2024-11-19 10:58:24.036500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.324 [2024-11-19 10:58:24.036552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.324 [2024-11-19 10:58:24.036568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.324 [2024-11-19 10:58:24.036575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.324 [2024-11-19 10:58:24.036582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.324 [2024-11-19 10:58:24.036597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.324 qpair failed and we were unable to recover it. 
00:30:34.324 [2024-11-19 10:58:24.046498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.324 [2024-11-19 10:58:24.046551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.324 [2024-11-19 10:58:24.046565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.324 [2024-11-19 10:58:24.046571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.324 [2024-11-19 10:58:24.046578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.324 [2024-11-19 10:58:24.046593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.325 qpair failed and we were unable to recover it. 
00:30:34.325 [2024-11-19 10:58:24.056557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.325 [2024-11-19 10:58:24.056625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.325 [2024-11-19 10:58:24.056640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.325 [2024-11-19 10:58:24.056647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.325 [2024-11-19 10:58:24.056653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6b40000b90 00:30:34.325 [2024-11-19 10:58:24.056668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:34.325 qpair failed and we were unable to recover it. 00:30:34.325 [2024-11-19 10:58:24.056776] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:30:34.325 A controller has encountered a failure and is being reset. 00:30:34.584 Controller properly reset. 00:30:34.584 Initializing NVMe Controllers 00:30:34.584 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:34.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:34.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:34.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:34.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:34.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:34.584 Initialization complete. Launching workers. 
00:30:34.584 Starting thread on core 1 00:30:34.584 Starting thread on core 2 00:30:34.584 Starting thread on core 3 00:30:34.584 Starting thread on core 0 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:34.584 00:30:34.584 real 0m10.821s 00:30:34.584 user 0m19.368s 00:30:34.584 sys 0m4.714s 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:34.584 ************************************ 00:30:34.584 END TEST nvmf_target_disconnect_tc2 00:30:34.584 ************************************ 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:34.584 rmmod nvme_tcp 00:30:34.584 rmmod nvme_fabrics 00:30:34.584 rmmod nvme_keyring 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 4091793 ']' 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 4091793 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 4091793 ']' 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 4091793 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:34.584 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4091793 00:30:34.843 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:30:34.843 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:30:34.843 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4091793' 00:30:34.843 killing process with pid 4091793 00:30:34.843 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 4091793 00:30:34.843 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 4091793 00:30:34.843 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:34.843 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:34.843 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:34.843 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:34.843 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:34.843 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:34.843 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:34.843 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:34.843 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:34.843 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.843 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:34.843 10:58:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.380 10:58:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:37.380 00:30:37.380 real 0m19.569s 00:30:37.380 user 0m47.165s 00:30:37.380 sys 0m9.571s 00:30:37.380 10:58:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:37.381 10:58:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:37.381 ************************************ 00:30:37.381 END TEST nvmf_target_disconnect 00:30:37.381 ************************************ 00:30:37.381 10:58:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:37.381 00:30:37.381 real 5m56.291s 00:30:37.381 user 10m40.799s 00:30:37.381 sys 1m58.586s 00:30:37.381 10:58:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:37.381 10:58:26 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.381 ************************************ 00:30:37.381 END TEST nvmf_host 00:30:37.381 ************************************ 00:30:37.381 10:58:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:37.381 10:58:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:37.381 10:58:26 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:37.381 10:58:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:37.381 10:58:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:37.381 10:58:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:37.381 ************************************ 00:30:37.381 START TEST nvmf_target_core_interrupt_mode 00:30:37.381 ************************************ 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:37.381 * Looking for test storage... 
00:30:37.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:37.381 10:58:26 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:37.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.381 --rc 
genhtml_branch_coverage=1 00:30:37.381 --rc genhtml_function_coverage=1 00:30:37.381 --rc genhtml_legend=1 00:30:37.381 --rc geninfo_all_blocks=1 00:30:37.381 --rc geninfo_unexecuted_blocks=1 00:30:37.381 00:30:37.381 ' 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:37.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.381 --rc genhtml_branch_coverage=1 00:30:37.381 --rc genhtml_function_coverage=1 00:30:37.381 --rc genhtml_legend=1 00:30:37.381 --rc geninfo_all_blocks=1 00:30:37.381 --rc geninfo_unexecuted_blocks=1 00:30:37.381 00:30:37.381 ' 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:37.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.381 --rc genhtml_branch_coverage=1 00:30:37.381 --rc genhtml_function_coverage=1 00:30:37.381 --rc genhtml_legend=1 00:30:37.381 --rc geninfo_all_blocks=1 00:30:37.381 --rc geninfo_unexecuted_blocks=1 00:30:37.381 00:30:37.381 ' 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:37.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.381 --rc genhtml_branch_coverage=1 00:30:37.381 --rc genhtml_function_coverage=1 00:30:37.381 --rc genhtml_legend=1 00:30:37.381 --rc geninfo_all_blocks=1 00:30:37.381 --rc geninfo_unexecuted_blocks=1 00:30:37.381 00:30:37.381 ' 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:37.381 
10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.381 10:58:26 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.381 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:37.382 
10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:37.382 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:37.382 ************************************ 00:30:37.382 START TEST nvmf_abort 00:30:37.382 ************************************ 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:37.382 * Looking for test storage... 
00:30:37.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:37.382 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:37.642 10:58:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:37.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.642 --rc genhtml_branch_coverage=1 00:30:37.642 --rc genhtml_function_coverage=1 00:30:37.642 --rc genhtml_legend=1 00:30:37.642 --rc geninfo_all_blocks=1 00:30:37.642 --rc geninfo_unexecuted_blocks=1 00:30:37.642 00:30:37.642 ' 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:37.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.642 --rc genhtml_branch_coverage=1 00:30:37.642 --rc genhtml_function_coverage=1 00:30:37.642 --rc genhtml_legend=1 00:30:37.642 --rc geninfo_all_blocks=1 00:30:37.642 --rc geninfo_unexecuted_blocks=1 00:30:37.642 00:30:37.642 ' 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:37.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.642 --rc genhtml_branch_coverage=1 00:30:37.642 --rc genhtml_function_coverage=1 00:30:37.642 --rc genhtml_legend=1 00:30:37.642 --rc geninfo_all_blocks=1 00:30:37.642 --rc geninfo_unexecuted_blocks=1 00:30:37.642 00:30:37.642 ' 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:37.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.642 --rc genhtml_branch_coverage=1 00:30:37.642 --rc genhtml_function_coverage=1 00:30:37.642 --rc genhtml_legend=1 00:30:37.642 --rc geninfo_all_blocks=1 00:30:37.642 --rc geninfo_unexecuted_blocks=1 00:30:37.642 00:30:37.642 ' 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:37.642 10:58:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.642 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:37.643 10:58:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:37.643 10:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:44.222 10:58:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:44.222 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:44.222 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.222 
10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:44.222 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:44.223 Found net devices under 0000:86:00.0: cvl_0_0 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:44.223 Found net devices under 0000:86:00.1: cvl_0_1 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:44.223 10:58:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:44.223 10:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:44.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:44.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:30:44.223 00:30:44.223 --- 10.0.0.2 ping statistics --- 00:30:44.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.223 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:44.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:44.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:30:44.223 00:30:44.223 --- 10.0.0.1 ping statistics --- 00:30:44.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.223 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=4096324 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 4096324 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 4096324 ']' 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:44.223 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.223 [2024-11-19 10:58:33.185292] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:44.223 [2024-11-19 10:58:33.186212] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:30:44.223 [2024-11-19 10:58:33.186249] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.223 [2024-11-19 10:58:33.264525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:44.223 [2024-11-19 10:58:33.303945] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.223 [2024-11-19 10:58:33.303981] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:44.223 [2024-11-19 10:58:33.303988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.223 [2024-11-19 10:58:33.303994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.223 [2024-11-19 10:58:33.303998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:44.223 [2024-11-19 10:58:33.305387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:44.223 [2024-11-19 10:58:33.305515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.223 [2024-11-19 10:58:33.305516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:44.223 [2024-11-19 10:58:33.372184] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:44.223 [2024-11-19 10:58:33.372944] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:44.223 [2024-11-19 10:58:33.373248] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:44.223 [2024-11-19 10:58:33.373389] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:44.482 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.482 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:44.482 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:44.482 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:44.482 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.482 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:44.482 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:44.482 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.482 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.482 [2024-11-19 10:58:34.062215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.482 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.482 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:44.482 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.482 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:30:44.482 Malloc0 00:30:44.482 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.482 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.483 Delay0 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.483 [2024-11-19 10:58:34.146223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.483 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:44.741 [2024-11-19 10:58:34.272967] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:46.642 Initializing NVMe Controllers 00:30:46.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:46.642 controller IO queue size 128 less than required 00:30:46.642 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:46.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:46.642 Initialization complete. Launching workers. 
00:30:46.642 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37928 00:30:46.642 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37985, failed to submit 66 00:30:46.642 success 37928, unsuccessful 57, failed 0 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:46.642 rmmod nvme_tcp 00:30:46.642 rmmod nvme_fabrics 00:30:46.642 rmmod nvme_keyring 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:46.642 10:58:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 4096324 ']' 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 4096324 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 4096324 ']' 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 4096324 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:46.642 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4096324 00:30:46.902 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:46.902 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:46.902 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4096324' 00:30:46.902 killing process with pid 4096324 00:30:46.902 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 4096324 00:30:46.902 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 4096324 00:30:46.902 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:46.902 10:58:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:46.902 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:46.902 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:46.902 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:46.903 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:46.903 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:46.903 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:46.903 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:46.903 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.903 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:46.903 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.440 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:49.440 00:30:49.440 real 0m11.683s 00:30:49.440 user 0m10.302s 00:30:49.440 sys 0m5.707s 00:30:49.440 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:49.440 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:49.440 ************************************ 00:30:49.440 END TEST nvmf_abort 00:30:49.440 ************************************ 00:30:49.440 10:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:49.440 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:49.440 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:49.440 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:49.440 ************************************ 00:30:49.440 START TEST nvmf_ns_hotplug_stress 00:30:49.440 ************************************ 00:30:49.440 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:49.440 * Looking for test storage... 
00:30:49.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:49.440 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:49.440 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:30:49.440 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:49.440 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:49.440 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:49.440 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:49.440 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:49.440 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:49.440 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:49.441 10:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:49.441 10:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:49.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.441 --rc genhtml_branch_coverage=1 00:30:49.441 --rc genhtml_function_coverage=1 00:30:49.441 --rc genhtml_legend=1 00:30:49.441 --rc geninfo_all_blocks=1 00:30:49.441 --rc geninfo_unexecuted_blocks=1 00:30:49.441 00:30:49.441 ' 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:49.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.441 --rc genhtml_branch_coverage=1 00:30:49.441 --rc genhtml_function_coverage=1 00:30:49.441 --rc genhtml_legend=1 00:30:49.441 --rc geninfo_all_blocks=1 00:30:49.441 --rc geninfo_unexecuted_blocks=1 00:30:49.441 00:30:49.441 ' 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:49.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.441 --rc genhtml_branch_coverage=1 00:30:49.441 --rc genhtml_function_coverage=1 00:30:49.441 --rc genhtml_legend=1 00:30:49.441 --rc geninfo_all_blocks=1 00:30:49.441 --rc geninfo_unexecuted_blocks=1 00:30:49.441 00:30:49.441 ' 00:30:49.441 10:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:49.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.441 --rc genhtml_branch_coverage=1 00:30:49.441 --rc genhtml_function_coverage=1 00:30:49.441 --rc genhtml_legend=1 00:30:49.441 --rc geninfo_all_blocks=1 00:30:49.441 --rc geninfo_unexecuted_blocks=1 00:30:49.441 00:30:49.441 ' 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.441 10:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.441 
10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:49.441 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:49.442 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:49.442 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:49.442 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:49.442 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:49.442 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:49.442 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:49.442 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:49.442 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:49.442 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.442 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.442 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.442 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:49.442 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:30:49.442 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:49.442 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:56.015 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:56.015 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:56.015 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:56.015 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:56.015 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:56.015 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:56.015 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:56.015 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:56.015 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:56.015 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:56.015 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:56.015 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:56.015 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:56.015 
10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:56.015 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:56.015 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:56.015 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:56.016 10:58:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:56.016 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:56.016 10:58:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:56.016 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.016 
10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:56.016 Found net devices under 0000:86:00.0: cvl_0_0 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:56.016 Found net devices under 0000:86:00.1: cvl_0_1 00:30:56.016 
10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:56.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:56.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:30:56.016 00:30:56.016 --- 10.0.0.2 ping statistics --- 00:30:56.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.016 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:56.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:56.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:30:56.016 00:30:56.016 --- 10.0.0.1 ping statistics --- 00:30:56.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.016 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:56.016 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:56.017 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:56.017 10:58:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:56.017 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:56.017 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:56.017 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:56.017 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:56.017 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:56.017 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:56.017 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=4100349 00:30:56.017 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:56.017 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 4100349 00:30:56.017 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 4100349 ']' 00:30:56.017 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.017 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:56.017 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:56.017 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:56.017 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:56.017 [2024-11-19 10:58:44.940287] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:56.017 [2024-11-19 10:58:44.941184] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:30:56.017 [2024-11-19 10:58:44.941242] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.017 [2024-11-19 10:58:45.020152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:56.017 [2024-11-19 10:58:45.061432] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:56.017 [2024-11-19 10:58:45.061468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:56.017 [2024-11-19 10:58:45.061475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:56.017 [2024-11-19 10:58:45.061481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:56.017 [2024-11-19 10:58:45.061487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:56.017 [2024-11-19 10:58:45.062848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:56.017 [2024-11-19 10:58:45.062956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.017 [2024-11-19 10:58:45.062957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:56.017 [2024-11-19 10:58:45.128715] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:56.017 [2024-11-19 10:58:45.129445] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:56.017 [2024-11-19 10:58:45.129790] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:56.017 [2024-11-19 10:58:45.129925] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:56.017 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:56.017 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:30:56.017 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:56.017 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:56.017 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:56.017 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:56.017 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:30:56.017 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:56.017 [2024-11-19 10:58:45.363698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:56.017 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:56.017 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:56.017 [2024-11-19 10:58:45.748134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:56.017 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:56.276 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:56.535 Malloc0 00:30:56.536 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:56.536 Delay0 00:30:56.794 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.794 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:57.052 NULL1 00:30:57.052 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:30:57.310 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4100825 00:30:57.310 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:57.310 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:30:57.310 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.568 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.568 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:57.568 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:57.826 true 00:30:57.826 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:30:57.827 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.084 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.342 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:58.342 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:58.342 true 00:30:58.342 10:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:30:58.342 10:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.599 Read completed with error (sct=0, sc=11) 00:30:58.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.599 10:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.871 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.871 [2024-11-19 10:58:48.510085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.871 [2024-11-19 10:58:48.513686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.872 [2024-11-19 10:58:48.513727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.513764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.513802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.513842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.513880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.513920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.513960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514334] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.514982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 
10:58:48.515589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.515873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.516047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.516091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.516123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.516167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.516215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.516266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.872 [2024-11-19 10:58:48.516637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.516689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.516736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.516779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.516828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.516875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.516920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.516975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 
[2024-11-19 10:58:48.517370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.517965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518007] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.518986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519196] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.519994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.520038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.520526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.520572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.520616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.520658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.520703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.520747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.873 [2024-11-19 10:58:48.520789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.520827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.520863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.520900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.520944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.520994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 
10:58:48.521076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.521987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 
[2024-11-19 10:58:48.522325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [2024-11-19 10:58:48.522984] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.874 [... same *ERROR* line from ctrlr_bdev.c:361 repeated verbatim for timestamps 10:58:48.523077 through 10:58:48.538807; duplicates omitted ...] 00:30:58.878 [2024-11-19 10:58:48.538853] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.538901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.538946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.538990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.539809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.540446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.540492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.540540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.540583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.540630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.540675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.540730] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.540780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.540829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.540879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.540925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.878 [2024-11-19 10:58:48.540974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.541957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 
10:58:48.542156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.542966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 
[2024-11-19 10:58:48.543561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.543971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.544014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.544061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.544097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.544144] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.544192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.544815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.544871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.544925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.544978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.545986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.546018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.879 [2024-11-19 10:58:48.546062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546104] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 10:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:58.880 [2024-11-19 10:58:48.546379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 10:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:58.880 [2024-11-19 10:58:48.546677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.546984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547241] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.547964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548796] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.880 [2024-11-19 10:58:48.548840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.881 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:58.883 [2024-11-19 10:58:48.565098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.883 [2024-11-19 10:58:48.565152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.883 [2024-11-19 10:58:48.565205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.883 [2024-11-19 10:58:48.565254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.883 [2024-11-19 10:58:48.565301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.883 [2024-11-19 10:58:48.565348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.883 [2024-11-19 10:58:48.565402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.883 [2024-11-19 10:58:48.565453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.883 [2024-11-19 10:58:48.565499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.883 [2024-11-19 10:58:48.565548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.883 [2024-11-19 10:58:48.565593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.883 [2024-11-19 10:58:48.565646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.565692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.565741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.565789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 
10:58:48.565839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.565888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.565936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.565982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.566996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 
[2024-11-19 10:58:48.567118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567715] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.567848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.568990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569159] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.569998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.570045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.884 [2024-11-19 10:58:48.570093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.570875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.570922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.570970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 
10:58:48.571221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.571962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 
[2024-11-19 10:58:48.572568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.572960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573163] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.885 [2024-11-19 10:58:48.573958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.574003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.574053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.574103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.574156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.574209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.574260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.574306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.574358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.574406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.574454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.574506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.574559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.574604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.574654] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.885 [2024-11-19 10:58:48.574701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.888 [2024-11-19 10:58:48.589469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:30:58.888 [2024-11-19 10:58:48.589523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.888 [2024-11-19 10:58:48.589711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.888 [2024-11-19 10:58:48.589762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.888 [2024-11-19 10:58:48.589806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.888 [2024-11-19 10:58:48.589855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.888 [2024-11-19 10:58:48.589906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.888 [2024-11-19 10:58:48.589951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.888 [2024-11-19 10:58:48.589996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.888 [2024-11-19 10:58:48.590046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.888 [2024-11-19 10:58:48.590102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.590161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.590216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.590266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.590315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591011] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.591990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 
10:58:48.592275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.592951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 
[2024-11-19 10:58:48.593679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.593903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594432] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.594985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.595027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.889 [2024-11-19 10:58:48.595061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.595103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.595138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.595176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.595221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.889 [2024-11-19 10:58:48.595264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.595303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.595345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.595393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.595439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.595487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.595536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.595586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.595645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.595692] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.595737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.595784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.595833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.595879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.595929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.595980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.596867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.597362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.597414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.597455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 
10:58:48.597496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.597540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.597582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.597625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.597670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.597709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.597741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.597784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.597816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.597854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.597896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.597940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.597982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 
[2024-11-19 10:58:48.598819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.598969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.599020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.599069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.599120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.599166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.599224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.599272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.599316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.599365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.599419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.599470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.599516] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.599569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.599616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.599664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.599710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.599766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.890 [2024-11-19 10:58:48.599818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.891 [2024-11-19 10:58:48.599868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.891 [2024-11-19 10:58:48.599915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.891 [2024-11-19 10:58:48.599965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.891 [2024-11-19 10:58:48.600024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.891 [2024-11-19 10:58:48.600074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.891 [2024-11-19 10:58:48.600126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.891 [2024-11-19 10:58:48.600176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.891 [2024-11-19 10:58:48.600226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.891 [2024-11-19 10:58:48.600274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:58.892 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616731] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.616981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.617965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 
10:58:48.618125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.618997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 
[2024-11-19 10:58:48.619505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.619982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.620026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.620074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.620124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.620174] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.620230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.894 [2024-11-19 10:58:48.620281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.620332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.620382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.620426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.620471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.620516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.620559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.620602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.621416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.621460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.621509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.621549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.621583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.895 [2024-11-19 10:58:48.621626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.621667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.621713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.621751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.621791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.621831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.621874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.621918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.621960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622256] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.622959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 
10:58:48.623667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.623960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.624005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.624044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.624080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.624122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.624161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.624207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.624251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.624450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.624499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.624544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.624588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.624627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.624667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.624711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.624753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.624794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.895 [2024-11-19 10:58:48.624835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.624867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.624908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.624939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.624978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 
[2024-11-19 10:58:48.625021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625681] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.625964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.626019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.626065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.626116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.626163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.626219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.626269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.626316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.896 [2024-11-19 10:58:48.626363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1
00:30:58.896 [2024-11-19 10:58:48.626409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[same error repeated continuously through 2024-11-19 10:58:48.642506; duplicate lines omitted]
> SGL length 1 00:30:58.899 [2024-11-19 10:58:48.642555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.642605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.642657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.642706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.642757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.642807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.642858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.642908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.642962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643248] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.643962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.644006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.644049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.644092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.644139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.644681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.644728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.644769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.644809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.644850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.644901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.644949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.645004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 
10:58:48.645052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.645103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.645149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.645197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.645253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.645299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.645347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.645390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.645434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.645476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.645524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.899 [2024-11-19 10:58:48.645568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.645617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.645660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.645706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.645755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.645798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.645843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.645892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.645938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.645984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 
[2024-11-19 10:58:48.646383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646939] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.646980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.647016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.647058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.647105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.647148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.647189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.647238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.647282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.647322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.647362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.647404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.647447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.648229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.648281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:58.900 [2024-11-19 10:58:48.648326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.648372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.648428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.648474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.648519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.648563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.648613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.648656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.648702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.648747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.648791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.648840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.648884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.648925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.648973] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.649968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.650008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.650050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.650092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.900 [2024-11-19 10:58:48.650133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.901 [2024-11-19 10:58:48.650173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.901 [2024-11-19 10:58:48.650220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.901 [2024-11-19 10:58:48.650263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.901 [2024-11-19 
10:58:48.650302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.901 [2024-11-19 10:58:48.650332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.901 [2024-11-19 10:58:48.650374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.901 [2024-11-19 10:58:48.650412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.901 [2024-11-19 10:58:48.650459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.901 [2024-11-19 10:58:48.650503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.901 [2024-11-19 10:58:48.650544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.901 [2024-11-19 10:58:48.650583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.901 [2024-11-19 10:58:48.650624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.901 [2024-11-19 10:58:48.650661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.901 [2024-11-19 10:58:48.650701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.901 [2024-11-19 10:58:48.650740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:58.901 [2024-11-19 10:58:48.650781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.650820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.650863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.650901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.650939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.650977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.651020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.651226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.651269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.651310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.651355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.651393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.651432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.651470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.651510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.651548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.651587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 
[2024-11-19 10:58:48.651625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.651668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.651727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.651777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.651827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.651878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.652299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.652354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.652388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.652429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.652472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.652508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.652547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.652593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.185 [2024-11-19 10:58:48.652636] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:30:59.187 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:30:59.188 [2024-11-19 10:58:48.668222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.188 [2024-11-19 10:58:48.668272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.188 [2024-11-19 10:58:48.668319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.188 [2024-11-19 10:58:48.668367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.188 [2024-11-19 10:58:48.668411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.188 [2024-11-19 10:58:48.668461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.188 [2024-11-19 10:58:48.668514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.188 [2024-11-19 10:58:48.668563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.188 [2024-11-19 10:58:48.668611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.188 [2024-11-19 10:58:48.668655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.188 [2024-11-19 10:58:48.668707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.188 [2024-11-19 10:58:48.668757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.668804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.668849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.668895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.668945] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.668992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.669960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 
10:58:48.670352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.670831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.671016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.671060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.671106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.671146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.671194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.671242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.671287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.671329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.671369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.671409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.671451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.671491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.671534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.671582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.671629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.671670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.672301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 
[2024-11-19 10:58:48.672355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.672401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.672446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.672488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.672525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.672574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.672620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.672667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.672732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.672780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.672825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.672876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.672919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.672970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.673024] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.673069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.673121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.673169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.673222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.673273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.673320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.673370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.673417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.673463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.673509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.189 [2024-11-19 10:58:48.673562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.673610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.673657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.673701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.190 [2024-11-19 10:58:48.673754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.673801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.673846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.673894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.673938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.673986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674377] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.674976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.675024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.675082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.675127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.675176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.675231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.675418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.675465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.675512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.675565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.675625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.675673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.675719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.675765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.675816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 
10:58:48.675862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.675910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.675957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.676992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.677040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.677085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.677124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.677162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.677209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 
[2024-11-19 10:58:48.677252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.677300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.677344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.677383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.677432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.677472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.677511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.677564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.677602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.678317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.678360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.678400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.678443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.190 [2024-11-19 10:58:48.678484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.191 [2024-11-19 10:58:48.678526] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.191 [2024-11-19 10:58:48.678567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd error repeated continuously from 10:58:48.678607 through 10:58:48.694200 and beyond; duplicate lines elided]
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.694253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.694298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.694342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.694394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.694446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.694493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.694539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.694586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.694632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.694683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.694733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.694783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.694830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.694878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.194 [2024-11-19 10:58:48.694925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.694971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695562] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.695994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 
10:58:48.696937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.696983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.697034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.697084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.697135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.697183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.697234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.697284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.194 [2024-11-19 10:58:48.697330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.697379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.697425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.697473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.697526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.697582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.697627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.697663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.698402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.698449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.698491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.698534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.698575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.698620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.698663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.698707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.698740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.698781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.698828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.698879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.698928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 
[2024-11-19 10:58:48.698974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699643] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.699996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700940] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.700989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.701036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.701086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.701136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.701182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.701235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.701284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.701469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.701517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.701570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.701619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.701671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.701716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.701767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.701817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.701865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.701913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.701964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.702012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.702061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.702114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.702158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.702211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.702261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.702307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.702358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.702407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.702457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 
10:58:48.702508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.702559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.702604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.195 [2024-11-19 10:58:48.702651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.702694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.702744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.702787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.702819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.702863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.702910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.702951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.702993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.703459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.703511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.703553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.703596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.703636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.703683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.703727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.703774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.703823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.703857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.703897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.703941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.703986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 
[2024-11-19 10:58:48.704198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.196 [2024-11-19 10:58:48.704781] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical ctrlr_bdev.c:361:nvmf_bdev_ctrlr_read_cmd errors repeated for timestamps 2024-11-19 10:58:48.704827 through 10:58:48.720241; repeats elided]
00:30:59.197 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:30:59.199 [2024-11-19 10:58:48.720285] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.720332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.720376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.720418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.720459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.720507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.720547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.720591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.720633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.720676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.720720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.720752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.720794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.720828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.720872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.199 [2024-11-19 10:58:48.720913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.720954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721604] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.721987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.722041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.722087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.722134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.722184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.722239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.199 [2024-11-19 10:58:48.722286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.722335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.722382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.722427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.722475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.722520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.722564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.722612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.722660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.722712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.722759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.722808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.722858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.722904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.722962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.723007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.723054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.723101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 
10:58:48.723150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.723195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.723243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.723284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.723327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.723821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.723871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.723916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.723950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.723992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 
[2024-11-19 10:58:48.724867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.724971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725581] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.725973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.726966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.727015] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.200 [2024-11-19 10:58:48.727057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.727993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.728043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.728096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.728143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.728197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.728942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.728987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 
10:58:48.729037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.729995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 
[2024-11-19 10:58:48.730288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 [2024-11-19 10:58:48.730934] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.201 true 00:30:59.204 
[2024-11-19 10:58:48.746433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.204 [2024-11-19 10:58:48.746477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.204 [2024-11-19 10:58:48.746524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.204 [2024-11-19 10:58:48.746580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.204 [2024-11-19 10:58:48.746626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.204 [2024-11-19 10:58:48.746671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.204 [2024-11-19 10:58:48.746717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.204 [2024-11-19 10:58:48.746780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.204 [2024-11-19 10:58:48.746824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.204 [2024-11-19 10:58:48.746870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.204 [2024-11-19 10:58:48.746917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.746968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747122] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.747972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 10:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:30:59.205 [2024-11-19 10:58:48.748060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748397] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 10:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.205 [2024-11-19 10:58:48.748528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.748990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.749029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.749075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:30:59.205 [2024-11-19 10:58:48.749105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.749144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.749184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.749228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.749273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.749316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.749360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.749406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.749453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.749850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.749904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.749952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.749997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750079] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.750973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.751006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.751046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.751090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.751136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.751180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.751232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.751280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.751330] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.751377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.751430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.205 [2024-11-19 10:58:48.751478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.751524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.751573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.751621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.751672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.751717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.751768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.751816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.751862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.751909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.751961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.752008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.752059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.752107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.752155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.752213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.752262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.752312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.752358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.752413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.752464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.752508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.752554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.752602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.752649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.752701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 
10:58:48.752748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.752942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.752992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.753959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 
[2024-11-19 10:58:48.754221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754823] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.754984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.755756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.755810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.755856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.755903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.755959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.756006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.756053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.756098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.756145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.206 [2024-11-19 10:58:48.756190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.206 [2024-11-19 10:58:48.756242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.208 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:59.210 [2024-11-19 10:58:48.771921] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.771962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.772836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773261] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.773970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 
10:58:48.774613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.774957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.775004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.210 [2024-11-19 10:58:48.775057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.775097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.775142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.775875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.775929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.775963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 
[2024-11-19 10:58:48.776617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.776958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777292] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.777977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778641] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.778990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.779954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 
10:58:48.779996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.780039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.780088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.211 [2024-11-19 10:58:48.780141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.780189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.780243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.780290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.780337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.780781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.780831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.780876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.780920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.780962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.780996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 
[2024-11-19 10:58:48.781680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.781972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.782019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.782067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.782113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.782163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.782222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.782271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.782321] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.212 [2024-11-19 10:58:48.782363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [identical error repeated through 00:30:59.215 / 2024-11-19 10:58:48.797691]
[2024-11-19 10:58:48.797738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.797785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.797834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.797889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.797933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.797969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798355] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.798995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.799035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.799075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.215 [2024-11-19 10:58:48.799119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.799156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.799197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.799244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.799291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.215 [2024-11-19 10:58:48.799333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.799371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.799411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.799448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.799488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.799532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.799572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.799616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.799646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.799695] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.799743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.799775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.799815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.799854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.799897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.799939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.799988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.800991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 
10:58:48.801038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.801085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.801138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.801187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.801235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.801283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.801322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.801373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.801414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.801448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.801494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.801539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.801591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.801632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.801680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.801724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.802290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.802346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.802393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.802439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.802483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.802532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.802588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.802638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.802684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.802730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.802795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.802841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.802886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 
[2024-11-19 10:58:48.802932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.802983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803615] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.803964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.804009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.804059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.804105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.804148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.804194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.804244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.804291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.216 [2024-11-19 10:58:48.804337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.804388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.216 [2024-11-19 10:58:48.804443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.804491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.804538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.804586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.804634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.804678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.804713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.804762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.804802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.804843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.804884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.804924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.804971] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.805014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.805057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.805097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.805133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.805176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.805222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.805267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 
10:58:48.806894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.806977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.807018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.807072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.807118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.807167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.807218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.807265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.807315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.807364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.807407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.807454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.807504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 [2024-11-19 10:58:48.807553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.217 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:59.220 [2024-11-19 
10:58:48.823362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.220 [2024-11-19 10:58:48.823407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.220 [2024-11-19 10:58:48.823450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.823497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.823546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.823594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.823642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.823689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.823739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.823787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.823831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.823879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.823924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.823971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 
[2024-11-19 10:58:48.824747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.824968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.825005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.825045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.825085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.825129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.825182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.825234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.825276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.825320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.825359] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.826959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827348] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.827995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.828038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.828078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.828122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.828167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.828220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.828266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.828308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.828354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.828398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.828446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.828498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.221 [2024-11-19 10:58:48.828549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.828597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 
10:58:48.828643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.828686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.828738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.828797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.828846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.828892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.828940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.829958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.830003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.830046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 
[2024-11-19 10:58:48.830094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.830148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.830193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.830246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.830294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.830347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.830392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.830450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.830497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.830542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.830590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831208] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.831982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832555] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.222 [2024-11-19 10:58:48.832886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.223 [2024-11-19 10:58:48.832931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.223 [2024-11-19 10:58:48.832974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.223 [2024-11-19 10:58:48.833015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.223 [2024-11-19 10:58:48.833064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.223 [2024-11-19 10:58:48.833107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.223 [2024-11-19 10:58:48.833162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.223 [2024-11-19 10:58:48.833212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [previous message repeated, timestamps 10:58:48.833247 through 10:58:48.849163; entries identical except for timestamps]
block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.849215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.849262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.849309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.849362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.849410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.849457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.849504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.849551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.849608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.849655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.849703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.849751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.849799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.849852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 
10:58:48.849898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.849946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.849993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.850996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 
[2024-11-19 10:58:48.851211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851799] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.851976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.852497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.852552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.852599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.852646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.852698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.852752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.852799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.852850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.852899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.852949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.226 [2024-11-19 10:58:48.852998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.853045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.853094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.853143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.226 [2024-11-19 10:58:48.853188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853615] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.853968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 
10:58:48.854837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.854997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.855048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.855095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.855145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.855199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.855251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.855296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.855340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.856935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 
[2024-11-19 10:58:48.856977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857583] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.227 [2024-11-19 10:58:48.857743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.857780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.857826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.857862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.857902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.857944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.857988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858855] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 [2024-11-19 10:58:48.858905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.228 
[identical *ERROR* line repeated for timestamps 10:58:48.858953 through 10:58:48.870356] 
00:30:59.230 Message suppressed 999 times: [2024-11-19 10:58:48.870409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.230 Read completed with error (sct=0, sc=15) 
[identical *ERROR* line repeated for timestamps 10:58:48.870461 through 10:58:48.875059] 
00:30:59.231 [2024-11-19 10:58:48.875103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 
10:58:48.875747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.875998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.231 [2024-11-19 10:58:48.876995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.877041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.877096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.877142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 
[2024-11-19 10:58:48.877190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.877243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.877293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.877340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.877395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.877444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.877491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.877536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.877942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.877988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878194] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.878953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879565] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.879967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.232 [2024-11-19 10:58:48.880926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 
10:58:48.881102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.881986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 
[2024-11-19 10:58:48.882374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.882967] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.883018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.883071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.883117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.883679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.883729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.883775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.883820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.883875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.883934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.883980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884822] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.233 [2024-11-19 10:58:48.884867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:59.233-00:30:59.237 (last message repeated for each subsequent read command, timestamps 10:58:48.884907 through 10:58:48.900251)
00:30:59.237 [2024-11-19 10:58:48.900299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.237 [2024-11-19 10:58:48.900345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.900402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.900456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.900506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.900539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.900582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.900624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.900667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.900713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.900756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.900808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.900850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.900894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.900936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.900981] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.901968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.902138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.902189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.902255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.902301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.902344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 
10:58:48.902386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.902426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.902486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.902526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.902571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.902618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.902665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.902716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.902766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.902816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.902861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.902911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.902962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.903011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.903058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.903103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.903150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.903199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.903257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.903305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.903355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.903402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.903449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.903501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.903547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.904289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.904336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.904381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.904424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 
[2024-11-19 10:58:48.904471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.904519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.237 [2024-11-19 10:58:48.904562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.904615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.904658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.904694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.904735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.904780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.904823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.904867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.904912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.904954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.904995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905080] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.905954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906456] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.906989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.907044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.907089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.907137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.907181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.907231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.907474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.907518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.907563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.907606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.907652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.907695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.907758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.907804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.907845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.907886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.907926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.907969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 
10:58:48.908020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.908064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.908111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.908152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.908194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.908240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.908283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.908325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.908367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.908406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.908446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.908489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.908530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.908568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.908611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.908656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.238 [2024-11-19 10:58:48.908687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.908726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.908765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.908810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.908861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.908907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.908962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 
[2024-11-19 10:58:48.909310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.909959] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.910010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.910054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.910094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.910134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.910179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.910233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.910277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.910838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.910889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.910936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.910983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.911035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.911081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [2024-11-19 10:58:48.911128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.239 [2024-11-19 10:58:48.911173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.239 [... identical *ERROR* line repeated continuously, timestamps 2024-11-19 10:58:48.911226 through 10:58:48.924026 ...] 00:30:59.242 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:59.242 [2024-11-19 10:58:48.924860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [... identical *ERROR* line repeated continuously, timestamps 10:58:48.924911 through 10:58:48.926600 ...] 00:30:59.242 [2024-11-19 10:58:48.926641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:30:59.242 [2024-11-19 10:58:48.926685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.926732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.926772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.926815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.926858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.926902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.926938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.926986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.927027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.927064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.927107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.927151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.927198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.927243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.927283] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.927327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.927366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.927403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.927447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.242 [2024-11-19 10:58:48.927487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.927533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.927573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.927610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.927653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.927843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.927891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.927941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.927988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 
10:58:48.928788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.928973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.929962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.930003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.930048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 
[2024-11-19 10:58:48.930091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.930131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.930173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.930219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.930257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.930301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.930343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.930385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.930425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.930467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.930512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.930553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.930595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.930638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.243 [2024-11-19 10:58:48.930685] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:59.515 10:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:59.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:59.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:59.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:59.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:59.515 [2024-11-19 10:58:49.169941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170326] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.170985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171513] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.171975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.172981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 
10:58:49.173027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.173959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.174001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.174042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.174085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.174123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.174161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.174207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 
[2024-11-19 10:58:49.174237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.174280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.174940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.174990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.175034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.175084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.175129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.175177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.175232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.175278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.175326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.175367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.175416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.175465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.515 [2024-11-19 10:58:49.175511] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.516 [2024-11-19 10:58:49.175565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[previous error line repeated verbatim; timestamps 2024-11-19 10:58:49.175608 through 10:58:49.190738]
00:30:59.517 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:30:59.518 [2024-11-19 10:58:49.190787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.190828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.190865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.190901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.190942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.190981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191358] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.191981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.192040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.192092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.192141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.192189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.192244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.192294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.192343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.192545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.192596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.192642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.192692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.192740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.192790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.192834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.192884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.192937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 
10:58:49.192990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.193972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.194012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.194589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.194637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.194676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.194716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.194757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 
[2024-11-19 10:58:49.194804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.194860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.194910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.194960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.195006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.195054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.195105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.195160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.195209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.518 [2024-11-19 10:58:49.195261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.195309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.195358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.195409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.195453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.195504] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.195549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.195595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.195643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.195697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.195745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.195794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.195843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.195889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.195939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.195985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196860] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.196986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.197985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 
10:58:49.198242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.198976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.199021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.199624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.199678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.199727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.199778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.199829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.199875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.199919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.199963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 
[2024-11-19 10:58:49.200162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.519 [2024-11-19 10:58:49.200765] ctrlr_bdev.c: 
00:30:59.519 [2024-11-19 10:58:49.200812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:59.521 [the identical ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd error repeats continuously from 10:58:49.200812 through 10:58:49.216627]
00:30:59.521 10:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:30:59.521 10:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
> SGL length 1 00:30:59.522 [2024-11-19 10:58:49.216675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.216726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.216774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.216821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.216863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.216913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.216958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217308] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.217983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 
10:58:49.218652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.218970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.219013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.219052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.219096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.219743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.219798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.219848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.219898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.219947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.219995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 
[2024-11-19 10:58:49.220569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.220964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221243] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.221962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.222007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.222044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.222084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.222129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.222175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.222223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.222267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.222308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.222348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.222385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.222426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.222475] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.522 [2024-11-19 10:58:49.222521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.222572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.222621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.222807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.222859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.222906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.222954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.223976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 
10:58:49.224028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.224076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.224119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.224158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.224200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.224244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.224293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.224332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.224898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.224948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.224990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 
[2024-11-19 10:58:49.225809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.225949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.226001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.226053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.226098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.226148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.226199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.226255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.226299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.226347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.226400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.226446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.226494] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.523 [2024-11-19 10:58:49.226538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:30:59.525 [2024-11-19 10:58:49.241990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242753] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.242993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.243040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.525 [2024-11-19 10:58:49.243084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.243965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.244006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 
10:58:49.244048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.244617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.244665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.244709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.244756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.244796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.244836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.244873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.244912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.244942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.244981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 
[2024-11-19 10:58:49.245841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.245986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246504] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.246988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247898] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.247949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.248976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.249569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.249619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.249653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.249695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.249737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.249774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 
10:58:49.249816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.249863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.249905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.249954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.249992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.250034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.250085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.250124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.250157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.250213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.250254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.250295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.526 [2024-11-19 10:58:49.250334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.250380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.250425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.250466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.250513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.250553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.250600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.250637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.250683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.250723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.250761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.250806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.250848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.250886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.250927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.250969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.251001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 
[2024-11-19 10:58:49.251040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.251069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.251111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.251153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.251194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.251242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.251286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.251330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.251375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.251418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.251459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.251498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.251543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.251586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.527 [2024-11-19 10:58:49.251635] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical "Read NLB 1 * block size 512 > SGL length 1" error repeated from 10:58:49.251681 through 10:58:49.267604; duplicates elided]
00:30:59.529 [2024-11-19 10:58:49.267604] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.267659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.267707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.267752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.267798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.267842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.267896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.267944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.267990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.268952] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.269510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.269558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.269603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.269642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.269685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.269720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.269764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.269805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.269847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.269885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.269929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.269966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 
10:58:49.270689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.270973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.529 [2024-11-19 10:58:49.271914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.271962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.272009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.272053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 
[2024-11-19 10:58:49.272101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.272148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.272197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.272250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.272301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.272344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.272396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.272582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.272636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.272684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.272731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.272777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.272828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.272870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.272915] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.272957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.273967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.274572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.274628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.274678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.274724] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.274770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.274816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.274868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.274914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.274964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.275998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 
10:58:49.276078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.276964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.277001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.277037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.277080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.277124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.277167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.277235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.277284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 
[2024-11-19 10:58:49.277331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.277378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.277425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.277621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.277667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.277711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.277758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.277804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.277852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.277901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.277952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.278000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.278048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.278095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.530 [2024-11-19 10:58:49.278145] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:59.789 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:59.789 true 00:30:59.789 10:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:30:59.789 10:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.724 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.981 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:31:00.981 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:31:00.981 true 00:31:00.981 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:00.981 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.239 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.496 10:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:01.496 10:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:31:01.754 true 00:31:01.754 10:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:01.754 10:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:02.686 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:02.947 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:02.947 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:03.274 true 00:31:03.274 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:03.274 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.274 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.592 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:03.592 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:03.850 true 00:31:03.850 10:58:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:03.851 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.783 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:04.783 10:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.783 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:04.783 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:05.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:05.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:05.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:05.042 10:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:05.042 10:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:05.300 true 00:31:05.300 10:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:05.300 10:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.232 10:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.232 10:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:06.232 10:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:06.489 true 00:31:06.489 10:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:06.489 10:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.747 10:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.747 10:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:06.747 10:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:07.005 true 00:31:07.005 10:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:07.005 10:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:08.378 10:58:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:08.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:08.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:08.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:08.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:08.378 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:08.378 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:08.378 true 00:31:08.636 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:08.636 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.201 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:09.461 10:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:09.461 10:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:09.721 true 00:31:09.721 10:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:09.721 10:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.979 10:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.237 10:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:10.237 10:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:10.237 true 00:31:10.494 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:10.495 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.428 10:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:11.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.686 10:59:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:11.686 10:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:11.686 true 00:31:11.686 10:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:11.686 10:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.944 10:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.201 10:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:12.201 10:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:12.201 true 00:31:12.458 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:12.458 10:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:13.830 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:13.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:13.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:13.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:13.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:13.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:13.830 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:13.830 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:14.088 true 00:31:14.088 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:14.088 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:15.021 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:15.021 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:15.021 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:15.279 true 00:31:15.279 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:15.279 10:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.536 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.536 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:15.536 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:15.794 true 00:31:15.794 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:15.794 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.164 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.164 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:31:17.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.164 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:17.164 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:17.421 true 00:31:17.421 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:17.421 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.353 10:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.611 10:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:18.611 10:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:18.611 true 00:31:18.611 10:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:18.611 10:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:31:18.869 10:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.126 10:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:19.126 10:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:19.383 true 00:31:19.383 10:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:19.383 10:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:20.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.314 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:20.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.572 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1023 00:31:20.573 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:20.830 true 00:31:20.830 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:20.830 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.762 10:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.762 10:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:21.762 10:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:22.019 true 00:31:22.019 10:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:22.019 10:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.276 10:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:22.276 10:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:22.533 10:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:22.533 true 00:31:22.533 10:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:22.534 10:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:23.906 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:23.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:23.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:23.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:23.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:23.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:23.906 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:23.906 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:24.164 true 00:31:24.164 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:24.164 10:59:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.098 10:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:25.098 10:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:25.098 10:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:25.356 true 00:31:25.356 10:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:25.356 10:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.614 10:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:25.614 10:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:25.614 10:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:25.872 true 00:31:25.872 10:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 
00:31:25.872 10:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:27.243 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:27.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:27.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:27.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:27.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:27.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:27.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:27.243 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:27.243 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:31:27.500 true 00:31:27.500 10:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:27.500 10:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.434 Initializing NVMe Controllers 00:31:28.434 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:28.434 
Controller IO queue size 128, less than required. 00:31:28.434 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:28.434 Controller IO queue size 128, less than required. 00:31:28.434 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:28.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:28.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:28.434 Initialization complete. Launching workers. 00:31:28.434 ======================================================== 00:31:28.434 Latency(us) 00:31:28.434 Device Information : IOPS MiB/s Average min max 00:31:28.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2554.59 1.25 34038.84 1989.70 1178000.69 00:31:28.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17311.31 8.45 7371.18 1549.90 369478.82 00:31:28.434 ======================================================== 00:31:28.434 Total : 19865.90 9.70 10800.43 1549.90 1178000.69 00:31:28.434 00:31:28.434 10:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.434 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:31:28.434 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:31:28.692 true 00:31:28.692 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4100825 00:31:28.692 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4100825) - No such process 00:31:28.692 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4100825 00:31:28.692 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.962 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:29.227 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:31:29.227 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:31:29.227 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:31:29.227 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:29.227 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:31:29.227 null0 00:31:29.227 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:29.227 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:29.227 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:31:29.485 null1 00:31:29.485 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:29.485 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:29.485 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:29.744 null2 00:31:29.744 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:29.744 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:29.744 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:29.744 null3 00:31:29.744 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:29.744 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:29.744 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:30.004 null4 00:31:30.004 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:30.004 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:30.004 10:59:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:30.262 null5 00:31:30.262 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:30.262 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:30.262 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:30.262 null6 00:31:30.262 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:30.262 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:30.262 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:30.524 null7 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
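The trace above (ns_hotplug_stress.sh@58-@60) shows the script creating eight null bdevs, null0 through null7, each 100 MB with a 4096-byte block size, by looping over `rpc.py bdev_null_create`. A minimal sketch of that loop, written as a standalone helper for illustration (the `create_null_bdevs` name and the `rpc` parameter are assumptions; in the real script the rpc.py path is hardcoded):

```shell
# Hypothetical helper mirroring the bdev-creation loop in the trace.
# $1 is the rpc.py invocation (SPDK's JSON-RPC client), $2 the thread count.
create_null_bdevs() {
    local rpc=$1 nthreads=$2 i
    for ((i = 0; i < nthreads; i++)); do
        # 100 MB null bdev with a 4096-byte block size, as in the log
        $rpc bdev_null_create "null$i" 100 4096
    done
}
```

Against a live SPDK target this would be called as `create_null_bdevs /path/to/rpc.py 8`; a null bdev discards writes and returns zeroes on reads, which makes it a cheap backing device for namespace hotplug stress.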
00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:30.524 10:59:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:30.524 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4106172 4106173 4106175 4106177 4106179 4106181 4106183 4106185 00:31:30.525 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:30.783 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.783 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:30.783 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:30.783 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:30.783 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:30.783 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:30.783 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:30.783 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 
null6 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.042 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:31.302 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.302 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:31.302 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:31.302 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:31.302 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:31.302 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:31.302 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:31.302 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.302 10:59:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.302 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:31.560 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.561 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:31.561 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:31.561 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:31.561 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:31.561 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:31.561 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:31.561 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.819 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:32.078 
10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:32.078 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:32.078 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:32.078 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:32.078 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:32.078 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.078 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:32.078 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:32.338 10:59:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.338 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.338 10:59:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:32.338 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:32.338 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:32.338 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.338 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:32.338 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:32.338 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:32.338 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:32.338 10:59:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.597 10:59:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.597 10:59:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:32.597 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:32.856 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:32.856 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:32.856 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:32.856 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.856 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:32.856 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:32.856 10:59:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:32.856 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
8 nqn.2016-06.io.spdk:cnode1 null7 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.115 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:33.374 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:33.374 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:33.374 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:33.374 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:33.374 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:33.374 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:33.374 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:33.374 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.374 10:59:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:33.374 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.374 10:59:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.375 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:33.375 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.375 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.375 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:33.633 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:33.633 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:33.633 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:33.633 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:33.633 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:33.633 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:33.633 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:33.633 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 
null4 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.892 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:34.151 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:34.151 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:34.151 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:34.151 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:31:34.151 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:34.151 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:34.151 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:34.151 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.410 10:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:34.410 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:34.410 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:34.410 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:34.410 10:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:34.410 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:34.410 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:34.410 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:34.410 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:34.669 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:34.670 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:34.670 10:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:34.670 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:34.670 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:34.670 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:34.670 rmmod nvme_tcp 00:31:34.670 rmmod nvme_fabrics 00:31:34.670 rmmod nvme_keyring 00:31:34.670 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:34.928 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:34.928 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:34.928 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 4100349 ']' 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 4100349 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 4100349 ']' 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 4100349 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4100349 00:31:34.929 10:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4100349' 00:31:34.929 killing process with pid 4100349 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 4100349 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 4100349 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:34.929 10:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:34.929 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.512 10:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:37.512 00:31:37.512 real 0m48.008s 00:31:37.512 user 3m1.323s 00:31:37.512 sys 0m20.848s 00:31:37.512 10:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:37.512 10:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:37.512 ************************************ 00:31:37.512 END TEST nvmf_ns_hotplug_stress 00:31:37.512 ************************************ 00:31:37.512 10:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:37.512 10:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:37.512 10:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:37.512 10:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:37.512 ************************************ 00:31:37.512 START TEST nvmf_delete_subsystem 00:31:37.512 ************************************ 00:31:37.512 10:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:37.512 * Looking for test storage... 00:31:37.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:37.512 10:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:37.512 10:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:31:37.512 10:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:37.512 
10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:37.512 10:59:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:37.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.512 --rc genhtml_branch_coverage=1 00:31:37.512 --rc genhtml_function_coverage=1 00:31:37.512 --rc genhtml_legend=1 00:31:37.512 --rc geninfo_all_blocks=1 00:31:37.512 --rc geninfo_unexecuted_blocks=1 00:31:37.512 00:31:37.512 ' 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:37.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.512 --rc genhtml_branch_coverage=1 00:31:37.512 --rc genhtml_function_coverage=1 00:31:37.512 --rc genhtml_legend=1 00:31:37.512 --rc geninfo_all_blocks=1 00:31:37.512 --rc geninfo_unexecuted_blocks=1 00:31:37.512 00:31:37.512 ' 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:37.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.512 --rc genhtml_branch_coverage=1 00:31:37.512 --rc genhtml_function_coverage=1 00:31:37.512 --rc genhtml_legend=1 00:31:37.512 --rc geninfo_all_blocks=1 00:31:37.512 --rc 
geninfo_unexecuted_blocks=1 00:31:37.512 00:31:37.512 ' 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:37.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.512 --rc genhtml_branch_coverage=1 00:31:37.512 --rc genhtml_function_coverage=1 00:31:37.512 --rc genhtml_legend=1 00:31:37.512 --rc geninfo_all_blocks=1 00:31:37.512 --rc geninfo_unexecuted_blocks=1 00:31:37.512 00:31:37.512 ' 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:37.512 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.513 
10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:37.513 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:37.513 10:59:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:44.084 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:31:44.084 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:44.084 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:44.085 10:59:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:44.085 Found net devices under 0000:86:00.0: cvl_0_0 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:44.085 Found net devices under 0000:86:00.1: cvl_0_1 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:44.085 10:59:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:31:44.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:44.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:31:44.085 00:31:44.085 --- 10.0.0.2 ping statistics --- 00:31:44.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.085 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:44.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:44.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:31:44.085 00:31:44.085 --- 10.0.0.1 ping statistics --- 00:31:44.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.085 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
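The trace above shows nvmf_tcp_init splitting one host into a target/initiator pair: the target port cvl_0_0 is moved into a network namespace, each side gets an address on 10.0.0.0/24, an iptables rule opens TCP port 4420, and a ping in each direction verifies connectivity. A condensed sketch of that sequence is below; the real commands require root, so this version echoes each step instead of executing it (swap the body of run for eval "$@" to apply them on a suitably equipped host).

```shell
#!/bin/sh
# Sketch of the namespace setup performed by nvmf/common.sh (nvmf_tcp_init).
# Interface and namespace names are taken from the log; 'run' only echoes.
NS=cvl_0_0_ns_spdk        # namespace that will hold the target NIC
TGT_IF=cvl_0_0            # target-side port, 10.0.0.2
INI_IF=cvl_0_1            # initiator-side port, 10.0.0.1

run() { echo "+ $*"; }    # replace with: eval "$@"  (needs root)

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target application is later launched with `ip netns exec cvl_0_0_ns_spdk`, it sees only the namespaced port, so initiator traffic to 10.0.0.2:4420 exercises a real NIC-to-NIC TCP path on a single machine.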
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=4110546 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 4110546 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 4110546 ']' 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:44.085 10:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:44.085 [2024-11-19 10:59:33.026896] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:44.085 [2024-11-19 10:59:33.027825] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:31:44.085 [2024-11-19 10:59:33.027860] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.085 [2024-11-19 10:59:33.106297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:44.085 [2024-11-19 10:59:33.144074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:44.085 [2024-11-19 10:59:33.144111] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.085 [2024-11-19 10:59:33.144118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:44.085 [2024-11-19 10:59:33.144124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:44.085 [2024-11-19 10:59:33.144130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:44.085 [2024-11-19 10:59:33.145350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.085 [2024-11-19 10:59:33.145351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.086 [2024-11-19 10:59:33.212216] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:31:44.086 [2024-11-19 10:59:33.212715] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:44.086 [2024-11-19 10:59:33.212957] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:44.086 [2024-11-19 10:59:33.286151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:44.086 [2024-11-19 10:59:33.314598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:44.086 NULL1 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:44.086 Delay0 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4110568 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:44.086 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:44.086 [2024-11-19 10:59:33.427799] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
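The rpc_cmd calls in delete_subsystem.sh build the device under test and then tear it down while spdk_nvme_perf has I/O in flight, which is why the log that follows is full of expected "completed with error (sct=0, sc=8)" completions. A hedged sketch of that RPC sequence, with arguments copied from the log, is below; `rpc` here is a stand-in that echoes instead of invoking scripts/rpc.py against a live target.

```shell
#!/bin/sh
# Sketch of the delete_subsystem test flow, reconstructed from the rpc_cmd
# trace above. 'rpc' echoes the call; against a live target it would be
# scripts/rpc.py (or rpc_cmd in the test harness).
NQN=nqn.2016-06.io.spdk:cnode1
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                 # 1000 MiB, 512 B blocks
rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000     # ~1 s latency per I/O
rpc nvmf_subsystem_add_ns "$NQN" Delay0
# spdk_nvme_perf connects and queues I/O here; the delay bdev guarantees
# requests are still outstanding when the subsystem is deleted:
rpc nvmf_delete_subsystem "$NQN"
```

The delay bdev is the key design choice: by inflating every I/O to roughly a second, the test makes it certain that nvmf_delete_subsystem races against in-flight commands, so the aborted completions seen below are the behavior being verified, not a failure.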
00:31:45.988 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:45.988 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.988 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:45.988 Read completed with error (sct=0, sc=8) 00:31:45.988 Read completed with error (sct=0, sc=8) 00:31:45.988 starting I/O failed: -6 00:31:45.988 Read completed with error (sct=0, sc=8) 00:31:45.988 Write completed with error (sct=0, sc=8) 00:31:45.988 Read completed with error (sct=0, sc=8) 00:31:45.988 Read completed with error (sct=0, sc=8) 00:31:45.988 starting I/O failed: -6 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, 
sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 [2024-11-19 10:59:35.544647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e84a0 is same with the state(6) to be set 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write 
completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error 
(sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 
Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed 
with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 Write completed with error (sct=0, sc=8) 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 starting I/O failed: -6 00:31:45.989 Read completed with error (sct=0, sc=8) 00:31:45.989 [2024-11-19 10:59:35.545693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f620800d4b0 is same with the state(6) to be set 00:31:46.926 [2024-11-19 10:59:36.522287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e99a0 is same with the state(6) to be set 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, 
sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 [2024-11-19 10:59:36.546033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f620800d7e0 is same with the state(6) to be set 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 
00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 [2024-11-19 10:59:36.546344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6208000c40 is same with the state(6) to be set 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 
00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 [2024-11-19 10:59:36.546495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7f620800d020 is same with the state(6) to be set 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Write completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 Read completed with error (sct=0, sc=8) 00:31:46.926 [2024-11-19 10:59:36.548471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8680 is same with the state(6) to be set 00:31:46.926 Initializing NVMe Controllers 00:31:46.926 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:46.926 Controller IO queue size 128, less than required. 00:31:46.926 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:46.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:46.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:46.926 Initialization complete. Launching workers. 
00:31:46.926 ======================================================== 00:31:46.926 Latency(us) 00:31:46.926 Device Information : IOPS MiB/s Average min max 00:31:46.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 158.12 0.08 865674.36 255.63 1008062.79 00:31:46.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.15 0.08 1209784.23 2462.17 2001544.22 00:31:46.926 ======================================================== 00:31:46.926 Total : 312.27 0.15 1035537.51 255.63 2001544.22 00:31:46.926 00:31:46.926 [2024-11-19 10:59:36.548789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e99a0 (9): Bad file descriptor 00:31:46.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:31:46.926 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.926 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:31:46.926 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4110568 00:31:46.926 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4110568 00:31:47.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4110568) - No such process 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4110568 00:31:47.495 10:59:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4110568 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 4110568 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:47.495 [2024-11-19 10:59:37.082427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:47.495 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.496 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:47.496 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.496 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4111252 00:31:47.496 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:47.496 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:47.496 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4111252 00:31:47.496 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:47.496 [2024-11-19 10:59:37.165570] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:48.062 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:48.062 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4111252 00:31:48.062 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:48.320 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:48.320 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4111252 00:31:48.320 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:48.887 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:48.887 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4111252 00:31:48.887 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:49.454 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:31:49.454 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4111252 00:31:49.454 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:50.020 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:50.020 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4111252 00:31:50.020 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:50.587 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:50.587 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4111252 00:31:50.587 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:50.587 Initializing NVMe Controllers 00:31:50.587 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:50.587 Controller IO queue size 128, less than required. 00:31:50.587 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:50.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:50.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:50.587 Initialization complete. Launching workers. 
00:31:50.587 ======================================================== 00:31:50.587 Latency(us) 00:31:50.587 Device Information : IOPS MiB/s Average min max 00:31:50.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002156.38 1000157.70 1006700.70 00:31:50.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003905.71 1000287.64 1010124.33 00:31:50.587 ======================================================== 00:31:50.587 Total : 256.00 0.12 1003031.05 1000157.70 1010124.33 00:31:50.587 00:31:50.846 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:50.846 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4111252 00:31:50.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4111252) - No such process 00:31:50.846 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4111252 00:31:50.846 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:50.846 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:31:50.846 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:50.846 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:31:50.846 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:50.846 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:31:50.846 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:31:50.846 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:51.105 rmmod nvme_tcp 00:31:51.105 rmmod nvme_fabrics 00:31:51.105 rmmod nvme_keyring 00:31:51.105 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:51.105 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:51.105 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:51.105 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 4110546 ']' 00:31:51.105 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 4110546 00:31:51.105 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 4110546 ']' 00:31:51.105 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 4110546 00:31:51.105 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:31:51.105 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:51.105 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4110546 00:31:51.105 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:51.105 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:51.105 10:59:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4110546' 00:31:51.105 killing process with pid 4110546 00:31:51.105 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 4110546 00:31:51.105 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 4110546 00:31:51.364 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:51.365 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:51.365 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:51.365 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:51.365 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:51.365 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:51.365 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:51.365 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:51.365 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:51.365 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.365 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:51.365 10:59:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.271 10:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:53.271 00:31:53.271 real 0m16.142s 00:31:53.271 user 0m25.963s 00:31:53.271 sys 0m6.126s 00:31:53.271 10:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:53.271 10:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:53.271 ************************************ 00:31:53.271 END TEST nvmf_delete_subsystem 00:31:53.271 ************************************ 00:31:53.271 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:53.271 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:53.271 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:53.271 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:53.271 ************************************ 00:31:53.271 START TEST nvmf_host_management 00:31:53.271 ************************************ 00:31:53.271 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:53.531 * Looking for test storage... 
00:31:53.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:53.531 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:53.531 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:31:53.531 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:53.531 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:53.531 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:53.531 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:53.531 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:53.531 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:53.531 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:53.531 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:53.531 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:53.531 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:53.531 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:53.531 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:53.531 10:59:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:53.531 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:53.531 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:53.531 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:53.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.532 --rc genhtml_branch_coverage=1 00:31:53.532 --rc genhtml_function_coverage=1 00:31:53.532 --rc genhtml_legend=1 00:31:53.532 --rc geninfo_all_blocks=1 00:31:53.532 --rc geninfo_unexecuted_blocks=1 00:31:53.532 00:31:53.532 ' 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:53.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.532 --rc genhtml_branch_coverage=1 00:31:53.532 --rc genhtml_function_coverage=1 00:31:53.532 --rc genhtml_legend=1 00:31:53.532 --rc geninfo_all_blocks=1 00:31:53.532 --rc geninfo_unexecuted_blocks=1 00:31:53.532 00:31:53.532 ' 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:53.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.532 --rc genhtml_branch_coverage=1 00:31:53.532 --rc genhtml_function_coverage=1 00:31:53.532 --rc genhtml_legend=1 00:31:53.532 --rc geninfo_all_blocks=1 00:31:53.532 --rc geninfo_unexecuted_blocks=1 00:31:53.532 00:31:53.532 ' 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:53.532 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.532 --rc genhtml_branch_coverage=1 00:31:53.532 --rc genhtml_function_coverage=1 00:31:53.532 --rc genhtml_legend=1 00:31:53.532 --rc geninfo_all_blocks=1 00:31:53.532 --rc geninfo_unexecuted_blocks=1 00:31:53.532 00:31:53.532 ' 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:53.532 10:59:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.532 
10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.532 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.533 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.533 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:53.533 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:53.533 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:53.533 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:32:00.210 
10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.210 10:59:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:00.210 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.210 10:59:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:00.210 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.210 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.211 10:59:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:00.211 Found net devices under 0000:86:00.0: cvl_0_0 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:00.211 Found net devices under 0000:86:00.1: cvl_0_1 00:32:00.211 10:59:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:00.211 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:00.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:32:00.211 00:32:00.211 --- 10.0.0.2 ping statistics --- 00:32:00.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.211 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:00.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:32:00.211 00:32:00.211 --- 10.0.0.1 ping statistics --- 00:32:00.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.211 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=4115247 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 4115247 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4115247 ']' 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.211 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:00.211 [2024-11-19 10:59:49.187002] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:00.211 [2024-11-19 10:59:49.187946] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:32:00.211 [2024-11-19 10:59:49.187982] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.211 [2024-11-19 10:59:49.265991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:00.211 [2024-11-19 10:59:49.308182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.211 [2024-11-19 10:59:49.308223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.211 [2024-11-19 10:59:49.308231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.211 [2024-11-19 10:59:49.308237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.211 [2024-11-19 10:59:49.308241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:00.211 [2024-11-19 10:59:49.309783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:00.211 [2024-11-19 10:59:49.309899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:00.211 [2024-11-19 10:59:49.310007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.211 [2024-11-19 10:59:49.310008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:00.211 [2024-11-19 10:59:49.376617] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:00.211 [2024-11-19 10:59:49.377240] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:00.211 [2024-11-19 10:59:49.377571] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:00.212 [2024-11-19 10:59:49.377982] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:00.212 [2024-11-19 10:59:49.378028] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:00.212 [2024-11-19 10:59:49.450718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:00.212 10:59:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:00.212 Malloc0 00:32:00.212 [2024-11-19 10:59:49.539003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4115297 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4115297 /var/tmp/bdevperf.sock 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4115297 ']' 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:00.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:00.212 { 00:32:00.212 "params": { 00:32:00.212 "name": "Nvme$subsystem", 00:32:00.212 "trtype": "$TEST_TRANSPORT", 00:32:00.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.212 "adrfam": "ipv4", 00:32:00.212 "trsvcid": "$NVMF_PORT", 00:32:00.212 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.212 "hdgst": ${hdgst:-false}, 00:32:00.212 "ddgst": ${ddgst:-false} 00:32:00.212 }, 00:32:00.212 "method": "bdev_nvme_attach_controller" 00:32:00.212 } 00:32:00.212 EOF 00:32:00.212 )") 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:00.212 "params": { 00:32:00.212 "name": "Nvme0", 00:32:00.212 "trtype": "tcp", 00:32:00.212 "traddr": "10.0.0.2", 00:32:00.212 "adrfam": "ipv4", 00:32:00.212 "trsvcid": "4420", 00:32:00.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:00.212 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:00.212 "hdgst": false, 00:32:00.212 "ddgst": false 00:32:00.212 }, 00:32:00.212 "method": "bdev_nvme_attach_controller" 00:32:00.212 }' 00:32:00.212 [2024-11-19 10:59:49.636081] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:32:00.212 [2024-11-19 10:59:49.636133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4115297 ] 00:32:00.212 [2024-11-19 10:59:49.711634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.212 [2024-11-19 10:59:49.752416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.212 Running I/O for 10 seconds... 
00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.212 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:00.473 10:59:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=113 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 113 -ge 100 ']' 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.473 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:00.473 
[2024-11-19 10:59:50.061560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:00.473 [2024-11-19 10:59:50.061855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 
10:59:50.061938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.473 [2024-11-19 10:59:50.061977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.473 [2024-11-19 10:59:50.061984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.061992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.061998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 
[2024-11-19 10:59:50.062275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.474 [2024-11-19 10:59:50.062476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.474 [2024-11-19 10:59:50.062484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.475 [2024-11-19 10:59:50.062490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.475 [2024-11-19 10:59:50.062498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.475 [2024-11-19 10:59:50.062505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.475 [2024-11-19 10:59:50.062513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.475 [2024-11-19 10:59:50.062519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:00.475 [2024-11-19 10:59:50.062527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.475 [2024-11-19 10:59:50.062534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.475 [2024-11-19 10:59:50.062543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.475 [2024-11-19 10:59:50.062549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.475 [2024-11-19 10:59:50.062646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:00.475 [2024-11-19 10:59:50.062657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.475 [2024-11-19 10:59:50.062664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:00.475 [2024-11-19 10:59:50.062673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.475 [2024-11-19 10:59:50.062680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:00.475 [2024-11-19 10:59:50.062687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.475 [2024-11-19 10:59:50.062694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:00.475 [2024-11-19 10:59:50.062701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.475 [2024-11-19 10:59:50.062707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9500 is same with the state(6) to be set 00:32:00.475 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.475 [2024-11-19 10:59:50.063567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:00.475 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:00.475 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.475 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:00.475 task offset: 24576 on job bdev=Nvme0n1 fails 00:32:00.475 00:32:00.475 Latency(us) 00:32:00.475 [2024-11-19T09:59:50.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:00.475 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:00.475 Job: Nvme0n1 ended in about 0.12 seconds with error 00:32:00.475 Verification LBA range: start 0x0 length 0x400 00:32:00.475 Nvme0n1 : 0.12 1661.68 103.85 553.89 0.00 26748.71 1568.18 27213.04 00:32:00.475 [2024-11-19T09:59:50.267Z] =================================================================================================================== 00:32:00.475 [2024-11-19T09:59:50.267Z] Total : 1661.68 103.85 553.89 0.00 26748.71 1568.18 27213.04 00:32:00.475 [2024-11-19 10:59:50.065925] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:00.475 [2024-11-19 10:59:50.065947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x10d9500 (9): Bad file descriptor 00:32:00.475 [2024-11-19 10:59:50.066896] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:32:00.475 [2024-11-19 10:59:50.066971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:00.475 [2024-11-19 10:59:50.066997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.475 [2024-11-19 10:59:50.067013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:32:00.475 [2024-11-19 10:59:50.067020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:32:00.475 [2024-11-19 10:59:50.067027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:00.475 [2024-11-19 10:59:50.067034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10d9500 00:32:00.475 [2024-11-19 10:59:50.067053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9500 (9): Bad file descriptor 00:32:00.475 [2024-11-19 10:59:50.067064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:00.475 [2024-11-19 10:59:50.067071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:00.475 [2024-11-19 10:59:50.067080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:32:00.475 [2024-11-19 10:59:50.067088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:00.475 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.475 10:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:32:01.413 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4115297 00:32:01.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4115297) - No such process 00:32:01.413 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:32:01.413 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:32:01.413 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:01.413 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:32:01.413 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:01.413 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:01.413 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:01.413 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:01.413 { 
00:32:01.413 "params": { 00:32:01.413 "name": "Nvme$subsystem", 00:32:01.413 "trtype": "$TEST_TRANSPORT", 00:32:01.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:01.413 "adrfam": "ipv4", 00:32:01.413 "trsvcid": "$NVMF_PORT", 00:32:01.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:01.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:01.413 "hdgst": ${hdgst:-false}, 00:32:01.413 "ddgst": ${ddgst:-false} 00:32:01.413 }, 00:32:01.413 "method": "bdev_nvme_attach_controller" 00:32:01.413 } 00:32:01.413 EOF 00:32:01.413 )") 00:32:01.413 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:01.413 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:32:01.413 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:01.413 10:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:01.413 "params": { 00:32:01.413 "name": "Nvme0", 00:32:01.413 "trtype": "tcp", 00:32:01.413 "traddr": "10.0.0.2", 00:32:01.413 "adrfam": "ipv4", 00:32:01.413 "trsvcid": "4420", 00:32:01.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:01.413 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:01.413 "hdgst": false, 00:32:01.413 "ddgst": false 00:32:01.413 }, 00:32:01.413 "method": "bdev_nvme_attach_controller" 00:32:01.413 }' 00:32:01.413 [2024-11-19 10:59:51.131574] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:32:01.413 [2024-11-19 10:59:51.131624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4115539 ] 00:32:01.672 [2024-11-19 10:59:51.208330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.672 [2024-11-19 10:59:51.248164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.672 Running I/O for 1 seconds... 00:32:03.048 2001.00 IOPS, 125.06 MiB/s 00:32:03.048 Latency(us) 00:32:03.048 [2024-11-19T09:59:52.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:03.048 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:03.048 Verification LBA range: start 0x0 length 0x400 00:32:03.048 Nvme0n1 : 1.01 2044.79 127.80 0.00 0.00 30718.39 1591.59 26963.38 00:32:03.048 [2024-11-19T09:59:52.840Z] =================================================================================================================== 00:32:03.048 [2024-11-19T09:59:52.840Z] Total : 2044.79 127.80 0.00 0.00 30718.39 1591.59 26963.38 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:03.048 rmmod nvme_tcp 00:32:03.048 rmmod nvme_fabrics 00:32:03.048 rmmod nvme_keyring 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 4115247 ']' 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 4115247 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 4115247 ']' 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 4115247 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:32:03.048 10:59:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4115247 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4115247' 00:32:03.048 killing process with pid 4115247 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 4115247 00:32:03.048 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 4115247 00:32:03.308 [2024-11-19 10:59:52.849647] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:03.308 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:03.308 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:03.308 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:03.308 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:03.308 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:03.308 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:03.308 10:59:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:03.308 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:03.308 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:03.308 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.308 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:03.308 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.213 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:05.213 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:05.213 00:32:05.213 real 0m11.893s 00:32:05.213 user 0m16.015s 00:32:05.213 sys 0m6.106s 00:32:05.213 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:05.213 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:05.213 ************************************ 00:32:05.213 END TEST nvmf_host_management 00:32:05.213 ************************************ 00:32:05.213 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:05.213 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:05.213 
10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:05.213 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:05.472 ************************************ 00:32:05.472 START TEST nvmf_lvol 00:32:05.472 ************************************ 00:32:05.472 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:05.472 * Looking for test storage... 00:32:05.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:05.472 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:05.472 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:32:05.472 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:05.472 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:05.472 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:05.473 10:59:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:05.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.473 --rc genhtml_branch_coverage=1 00:32:05.473 --rc 
genhtml_function_coverage=1 00:32:05.473 --rc genhtml_legend=1 00:32:05.473 --rc geninfo_all_blocks=1 00:32:05.473 --rc geninfo_unexecuted_blocks=1 00:32:05.473 00:32:05.473 ' 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:05.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.473 --rc genhtml_branch_coverage=1 00:32:05.473 --rc genhtml_function_coverage=1 00:32:05.473 --rc genhtml_legend=1 00:32:05.473 --rc geninfo_all_blocks=1 00:32:05.473 --rc geninfo_unexecuted_blocks=1 00:32:05.473 00:32:05.473 ' 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:05.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.473 --rc genhtml_branch_coverage=1 00:32:05.473 --rc genhtml_function_coverage=1 00:32:05.473 --rc genhtml_legend=1 00:32:05.473 --rc geninfo_all_blocks=1 00:32:05.473 --rc geninfo_unexecuted_blocks=1 00:32:05.473 00:32:05.473 ' 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:05.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.473 --rc genhtml_branch_coverage=1 00:32:05.473 --rc genhtml_function_coverage=1 00:32:05.473 --rc genhtml_legend=1 00:32:05.473 --rc geninfo_all_blocks=1 00:32:05.473 --rc geninfo_unexecuted_blocks=1 00:32:05.473 00:32:05.473 ' 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.473 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.474 10:59:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:05.474 10:59:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:05.474 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:12.047 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.047 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:12.048 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:12.048 11:00:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:12.048 Found net devices under 0000:86:00.0: cvl_0_0 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:12.048 Found net devices under 0000:86:00.1: cvl_0_1 00:32:12.048 11:00:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:12.048 11:00:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:12.048 11:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:12.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:12.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:32:12.048 00:32:12.048 --- 10.0.0.2 ping statistics --- 00:32:12.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.048 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:12.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:12.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:32:12.048 00:32:12.048 --- 10.0.0.1 ping statistics --- 00:32:12.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.048 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:12.048 
11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=4119356 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 4119356 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 4119356 ']' 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:12.048 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:12.048 [2024-11-19 11:00:01.189467] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:32:12.048 [2024-11-19 11:00:01.190354] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:32:12.048 [2024-11-19 11:00:01.190387] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.048 [2024-11-19 11:00:01.268125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:12.048 [2024-11-19 11:00:01.309571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:12.048 [2024-11-19 11:00:01.309609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:12.048 [2024-11-19 11:00:01.309616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:12.048 [2024-11-19 11:00:01.309622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:12.049 [2024-11-19 11:00:01.309627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:12.049 [2024-11-19 11:00:01.311009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.049 [2024-11-19 11:00:01.311117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.049 [2024-11-19 11:00:01.311118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:12.049 [2024-11-19 11:00:01.377167] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:12.049 [2024-11-19 11:00:01.378022] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:12.049 [2024-11-19 11:00:01.378175] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:12.049 [2024-11-19 11:00:01.378351] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:12.049 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:12.049 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:12.049 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:12.049 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:12.049 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:12.049 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:12.049 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:12.049 [2024-11-19 11:00:01.631887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:12.049 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:12.308 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:12.308 11:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:12.567 11:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:12.567 11:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:12.567 11:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:12.826 11:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0e2c7563-c92a-4a24-ac5c-e640952273e4 00:32:12.826 11:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0e2c7563-c92a-4a24-ac5c-e640952273e4 lvol 20 00:32:13.083 11:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=caa902a3-d45e-48f9-b9ce-112485082cb7 00:32:13.083 11:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:13.342 11:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 caa902a3-d45e-48f9-b9ce-112485082cb7 00:32:13.342 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:13.601 [2024-11-19 11:00:03.283845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.601 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:13.860 
11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4119901 00:32:13.860 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:13.860 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:14.795 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot caa902a3-d45e-48f9-b9ce-112485082cb7 MY_SNAPSHOT 00:32:15.054 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6037142e-12af-464f-b716-08a322eddf60 00:32:15.054 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize caa902a3-d45e-48f9-b9ce-112485082cb7 30 00:32:15.312 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6037142e-12af-464f-b716-08a322eddf60 MY_CLONE 00:32:15.571 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a97f07c0-e99b-4160-a63c-3c9b6d42c860 00:32:15.571 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a97f07c0-e99b-4160-a63c-3c9b6d42c860 00:32:16.138 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4119901 00:32:24.250 Initializing NVMe Controllers 00:32:24.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:24.250 
Controller IO queue size 128, less than required. 00:32:24.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:24.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:24.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:24.250 Initialization complete. Launching workers. 00:32:24.250 ======================================================== 00:32:24.250 Latency(us) 00:32:24.250 Device Information : IOPS MiB/s Average min max 00:32:24.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12413.30 48.49 10315.12 3730.59 59283.21 00:32:24.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12593.00 49.19 10168.20 2144.37 56240.34 00:32:24.250 ======================================================== 00:32:24.250 Total : 25006.30 97.68 10241.13 2144.37 59283.21 00:32:24.250 00:32:24.250 11:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:24.508 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete caa902a3-d45e-48f9-b9ce-112485082cb7 00:32:24.767 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0e2c7563-c92a-4a24-ac5c-e640952273e4 00:32:24.767 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:24.767 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:24.767 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:32:24.767 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:24.767 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:24.767 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:24.767 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:24.767 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:24.767 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:24.767 rmmod nvme_tcp 00:32:24.767 rmmod nvme_fabrics 00:32:25.027 rmmod nvme_keyring 00:32:25.027 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:25.027 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:25.027 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:25.027 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 4119356 ']' 00:32:25.027 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 4119356 00:32:25.027 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 4119356 ']' 00:32:25.027 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 4119356 00:32:25.027 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:25.027 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:25.027 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 4119356 00:32:25.027 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:25.027 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:25.027 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4119356' 00:32:25.027 killing process with pid 4119356 00:32:25.027 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 4119356 00:32:25.027 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 4119356 00:32:25.286 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:25.286 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:25.286 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:25.286 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:25.286 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:25.286 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:25.286 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:25.286 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:25.286 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:25.286 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.286 11:00:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.286 11:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.192 11:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:27.192 00:32:27.192 real 0m21.911s 00:32:27.192 user 0m55.665s 00:32:27.192 sys 0m9.966s 00:32:27.192 11:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.192 11:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:27.192 ************************************ 00:32:27.192 END TEST nvmf_lvol 00:32:27.192 ************************************ 00:32:27.192 11:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:27.192 11:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:27.192 11:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:27.192 11:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:27.452 ************************************ 00:32:27.452 START TEST nvmf_lvs_grow 00:32:27.452 ************************************ 00:32:27.452 11:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:27.452 * Looking for test storage... 
00:32:27.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:27.452 11:00:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:27.452 11:00:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:27.452 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:27.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.453 --rc genhtml_branch_coverage=1 00:32:27.453 --rc genhtml_function_coverage=1 00:32:27.453 --rc genhtml_legend=1 00:32:27.453 --rc geninfo_all_blocks=1 00:32:27.453 --rc geninfo_unexecuted_blocks=1 00:32:27.453 00:32:27.453 ' 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:27.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.453 --rc genhtml_branch_coverage=1 00:32:27.453 --rc genhtml_function_coverage=1 00:32:27.453 --rc genhtml_legend=1 00:32:27.453 --rc geninfo_all_blocks=1 00:32:27.453 --rc geninfo_unexecuted_blocks=1 00:32:27.453 00:32:27.453 ' 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:27.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.453 --rc genhtml_branch_coverage=1 00:32:27.453 --rc genhtml_function_coverage=1 00:32:27.453 --rc genhtml_legend=1 00:32:27.453 --rc geninfo_all_blocks=1 00:32:27.453 --rc geninfo_unexecuted_blocks=1 00:32:27.453 00:32:27.453 ' 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:27.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.453 --rc genhtml_branch_coverage=1 00:32:27.453 --rc genhtml_function_coverage=1 00:32:27.453 --rc genhtml_legend=1 00:32:27.453 --rc geninfo_all_blocks=1 00:32:27.453 --rc 
geninfo_unexecuted_blocks=1 00:32:27.453 00:32:27.453 ' 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:27.453 11:00:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.453 11:00:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:27.453 11:00:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:27.453 11:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:34.026 
11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:34.026 11:00:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:34.026 11:00:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:34.026 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:34.026 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:34.026 Found net devices under 0000:86:00.0: cvl_0_0 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.026 11:00:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:34.026 Found net devices under 0000:86:00.1: cvl_0_1 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:34.026 
11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:34.026 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:34.027 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:34.027 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:34.027 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:34.027 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:34.027 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:34.027 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:34.027 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:34.027 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:32:34.027 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:34.027 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:34.027 11:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:34.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:34.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:32:34.027 00:32:34.027 --- 10.0.0.2 ping statistics --- 00:32:34.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.027 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:34.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:34.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:32:34.027 00:32:34.027 --- 10.0.0.1 ping statistics --- 00:32:34.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.027 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:34.027 11:00:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=4125431 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 4125431 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 4125431 ']' 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:34.027 11:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:34.027 [2024-11-19 11:00:23.189195] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:34.027 [2024-11-19 11:00:23.190100] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:32:34.027 [2024-11-19 11:00:23.190132] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:34.027 [2024-11-19 11:00:23.289405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.027 [2024-11-19 11:00:23.332050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:34.027 [2024-11-19 11:00:23.332084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:34.027 [2024-11-19 11:00:23.332093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:34.027 [2024-11-19 11:00:23.332099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:34.027 [2024-11-19 11:00:23.332104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:34.027 [2024-11-19 11:00:23.332646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.027 [2024-11-19 11:00:23.398856] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:34.027 [2024-11-19 11:00:23.399069] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:34.316 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:34.316 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:34.316 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:34.316 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:34.316 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:34.316 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:34.316 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:34.576 [2024-11-19 11:00:24.225294] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:34.576 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:34.576 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:34.576 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:34.576 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:34.576 ************************************ 00:32:34.576 START TEST lvs_grow_clean 00:32:34.576 ************************************ 00:32:34.576 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:32:34.576 11:00:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:34.576 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:34.576 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:34.576 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:34.576 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:34.576 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:34.576 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:34.576 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:34.576 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:34.835 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:34.835 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:35.094 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7ff62b19-5cf4-4cc4-8f23-34d8877d035f 00:32:35.094 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ff62b19-5cf4-4cc4-8f23-34d8877d035f 00:32:35.094 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:35.352 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:35.352 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:35.352 11:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7ff62b19-5cf4-4cc4-8f23-34d8877d035f lvol 150 00:32:35.352 11:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=43959606-b65f-4630-8a5a-532f341965ee 00:32:35.352 11:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:35.352 11:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:35.611 [2024-11-19 11:00:25.233025] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:35.611 [2024-11-19 11:00:25.233148] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:35.611 true 00:32:35.611 11:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ff62b19-5cf4-4cc4-8f23-34d8877d035f 00:32:35.611 11:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:35.869 11:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:35.869 11:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:35.869 11:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 43959606-b65f-4630-8a5a-532f341965ee 00:32:36.129 11:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:36.389 [2024-11-19 11:00:25.953479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:36.389 11:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:36.389 11:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4125975 00:32:36.389 11:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:36.389 11:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:36.389 11:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4125975 /var/tmp/bdevperf.sock 00:32:36.389 11:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 4125975 ']' 00:32:36.389 11:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:36.389 11:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:36.389 11:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:36.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:36.389 11:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:36.389 11:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:36.648 [2024-11-19 11:00:26.191876] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:32:36.648 [2024-11-19 11:00:26.191923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4125975 ] 00:32:36.648 [2024-11-19 11:00:26.265291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.648 [2024-11-19 11:00:26.307582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.648 11:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:36.648 11:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:36.648 11:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:37.217 Nvme0n1 00:32:37.217 11:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:37.217 [ 00:32:37.217 { 00:32:37.217 "name": "Nvme0n1", 00:32:37.217 "aliases": [ 00:32:37.217 "43959606-b65f-4630-8a5a-532f341965ee" 00:32:37.217 ], 00:32:37.217 "product_name": "NVMe disk", 00:32:37.217 
"block_size": 4096, 00:32:37.217 "num_blocks": 38912, 00:32:37.217 "uuid": "43959606-b65f-4630-8a5a-532f341965ee", 00:32:37.217 "numa_id": 1, 00:32:37.217 "assigned_rate_limits": { 00:32:37.217 "rw_ios_per_sec": 0, 00:32:37.217 "rw_mbytes_per_sec": 0, 00:32:37.217 "r_mbytes_per_sec": 0, 00:32:37.217 "w_mbytes_per_sec": 0 00:32:37.217 }, 00:32:37.217 "claimed": false, 00:32:37.217 "zoned": false, 00:32:37.217 "supported_io_types": { 00:32:37.217 "read": true, 00:32:37.217 "write": true, 00:32:37.217 "unmap": true, 00:32:37.217 "flush": true, 00:32:37.217 "reset": true, 00:32:37.217 "nvme_admin": true, 00:32:37.217 "nvme_io": true, 00:32:37.217 "nvme_io_md": false, 00:32:37.217 "write_zeroes": true, 00:32:37.217 "zcopy": false, 00:32:37.217 "get_zone_info": false, 00:32:37.217 "zone_management": false, 00:32:37.217 "zone_append": false, 00:32:37.217 "compare": true, 00:32:37.217 "compare_and_write": true, 00:32:37.217 "abort": true, 00:32:37.217 "seek_hole": false, 00:32:37.217 "seek_data": false, 00:32:37.217 "copy": true, 00:32:37.217 "nvme_iov_md": false 00:32:37.217 }, 00:32:37.217 "memory_domains": [ 00:32:37.217 { 00:32:37.217 "dma_device_id": "system", 00:32:37.217 "dma_device_type": 1 00:32:37.217 } 00:32:37.217 ], 00:32:37.217 "driver_specific": { 00:32:37.217 "nvme": [ 00:32:37.217 { 00:32:37.217 "trid": { 00:32:37.217 "trtype": "TCP", 00:32:37.217 "adrfam": "IPv4", 00:32:37.217 "traddr": "10.0.0.2", 00:32:37.217 "trsvcid": "4420", 00:32:37.217 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:37.217 }, 00:32:37.217 "ctrlr_data": { 00:32:37.217 "cntlid": 1, 00:32:37.217 "vendor_id": "0x8086", 00:32:37.217 "model_number": "SPDK bdev Controller", 00:32:37.218 "serial_number": "SPDK0", 00:32:37.218 "firmware_revision": "25.01", 00:32:37.218 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:37.218 "oacs": { 00:32:37.218 "security": 0, 00:32:37.218 "format": 0, 00:32:37.218 "firmware": 0, 00:32:37.218 "ns_manage": 0 00:32:37.218 }, 00:32:37.218 "multi_ctrlr": true, 
00:32:37.218 "ana_reporting": false 00:32:37.218 }, 00:32:37.218 "vs": { 00:32:37.218 "nvme_version": "1.3" 00:32:37.218 }, 00:32:37.218 "ns_data": { 00:32:37.218 "id": 1, 00:32:37.218 "can_share": true 00:32:37.218 } 00:32:37.218 } 00:32:37.218 ], 00:32:37.218 "mp_policy": "active_passive" 00:32:37.218 } 00:32:37.218 } 00:32:37.218 ] 00:32:37.218 11:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4126156 00:32:37.218 11:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:37.218 11:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:37.478 Running I/O for 10 seconds... 00:32:38.415 Latency(us) 00:32:38.415 [2024-11-19T10:00:28.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:38.415 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:38.415 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:32:38.415 [2024-11-19T10:00:28.207Z] =================================================================================================================== 00:32:38.415 [2024-11-19T10:00:28.207Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:32:38.415 00:32:39.347 11:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7ff62b19-5cf4-4cc4-8f23-34d8877d035f 00:32:39.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:39.347 Nvme0n1 : 2.00 23050.50 90.04 0.00 0.00 0.00 0.00 0.00 00:32:39.347 [2024-11-19T10:00:29.139Z] 
=================================================================================================================== 00:32:39.347 [2024-11-19T10:00:29.139Z] Total : 23050.50 90.04 0.00 0.00 0.00 0.00 0.00 00:32:39.347 00:32:39.347 true 00:32:39.607 11:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ff62b19-5cf4-4cc4-8f23-34d8877d035f 00:32:39.607 11:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:39.607 11:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:39.607 11:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:39.607 11:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4126156 00:32:40.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:40.543 Nvme0n1 : 3.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:32:40.543 [2024-11-19T10:00:30.335Z] =================================================================================================================== 00:32:40.543 [2024-11-19T10:00:30.335Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:32:40.543 00:32:41.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:41.479 Nvme0n1 : 4.00 23177.50 90.54 0.00 0.00 0.00 0.00 0.00 00:32:41.479 [2024-11-19T10:00:31.271Z] =================================================================================================================== 00:32:41.479 [2024-11-19T10:00:31.271Z] Total : 23177.50 90.54 0.00 0.00 0.00 0.00 0.00 00:32:41.479 00:32:42.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:32:42.417 Nvme0n1 : 5.00 23266.40 90.88 0.00 0.00 0.00 0.00 0.00 00:32:42.417 [2024-11-19T10:00:32.209Z] =================================================================================================================== 00:32:42.417 [2024-11-19T10:00:32.209Z] Total : 23266.40 90.88 0.00 0.00 0.00 0.00 0.00 00:32:42.417 00:32:43.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:43.354 Nvme0n1 : 6.00 23325.67 91.12 0.00 0.00 0.00 0.00 0.00 00:32:43.354 [2024-11-19T10:00:33.146Z] =================================================================================================================== 00:32:43.354 [2024-11-19T10:00:33.146Z] Total : 23325.67 91.12 0.00 0.00 0.00 0.00 0.00 00:32:43.354 00:32:44.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:44.291 Nvme0n1 : 7.00 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:32:44.291 [2024-11-19T10:00:34.083Z] =================================================================================================================== 00:32:44.291 [2024-11-19T10:00:34.083Z] Total : 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:32:44.291 00:32:45.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:45.670 Nvme0n1 : 8.00 23399.75 91.41 0.00 0.00 0.00 0.00 0.00 00:32:45.670 [2024-11-19T10:00:35.462Z] =================================================================================================================== 00:32:45.670 [2024-11-19T10:00:35.462Z] Total : 23399.75 91.41 0.00 0.00 0.00 0.00 0.00 00:32:45.670 00:32:46.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:46.608 Nvme0n1 : 9.00 23424.44 91.50 0.00 0.00 0.00 0.00 0.00 00:32:46.608 [2024-11-19T10:00:36.400Z] =================================================================================================================== 00:32:46.608 [2024-11-19T10:00:36.400Z] Total : 23424.44 91.50 0.00 0.00 0.00 0.00 0.00 00:32:46.608 
00:32:47.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:47.545 Nvme0n1 : 10.00 23444.20 91.58 0.00 0.00 0.00 0.00 0.00 00:32:47.545 [2024-11-19T10:00:37.337Z] =================================================================================================================== 00:32:47.545 [2024-11-19T10:00:37.337Z] Total : 23444.20 91.58 0.00 0.00 0.00 0.00 0.00 00:32:47.545 00:32:47.545 00:32:47.545 Latency(us) 00:32:47.545 [2024-11-19T10:00:37.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:47.545 Nvme0n1 : 10.00 23448.81 91.60 0.00 0.00 5455.81 4805.97 25465.42 00:32:47.545 [2024-11-19T10:00:37.337Z] =================================================================================================================== 00:32:47.545 [2024-11-19T10:00:37.337Z] Total : 23448.81 91.60 0.00 0.00 5455.81 4805.97 25465.42 00:32:47.545 { 00:32:47.545 "results": [ 00:32:47.545 { 00:32:47.545 "job": "Nvme0n1", 00:32:47.545 "core_mask": "0x2", 00:32:47.545 "workload": "randwrite", 00:32:47.545 "status": "finished", 00:32:47.545 "queue_depth": 128, 00:32:47.545 "io_size": 4096, 00:32:47.545 "runtime": 10.003492, 00:32:47.545 "iops": 23448.811674963104, 00:32:47.545 "mibps": 91.59692060532463, 00:32:47.545 "io_failed": 0, 00:32:47.545 "io_timeout": 0, 00:32:47.545 "avg_latency_us": 5455.809703169122, 00:32:47.545 "min_latency_us": 4805.973333333333, 00:32:47.545 "max_latency_us": 25465.417142857143 00:32:47.545 } 00:32:47.545 ], 00:32:47.545 "core_count": 1 00:32:47.545 } 00:32:47.545 11:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4125975 00:32:47.545 11:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 4125975 ']' 00:32:47.545 11:00:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 4125975 00:32:47.545 11:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:47.545 11:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:47.545 11:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4125975 00:32:47.545 11:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:47.545 11:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:47.545 11:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4125975' 00:32:47.545 killing process with pid 4125975 00:32:47.545 11:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 4125975 00:32:47.545 Received shutdown signal, test time was about 10.000000 seconds 00:32:47.545 00:32:47.545 Latency(us) 00:32:47.545 [2024-11-19T10:00:37.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.545 [2024-11-19T10:00:37.337Z] =================================================================================================================== 00:32:47.545 [2024-11-19T10:00:37.337Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:47.545 11:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 4125975 00:32:47.545 11:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:47.822 11:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:48.080 11:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ff62b19-5cf4-4cc4-8f23-34d8877d035f 00:32:48.080 11:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:48.339 11:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:48.339 11:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:48.339 11:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:48.339 [2024-11-19 11:00:38.057092] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:48.339 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ff62b19-5cf4-4cc4-8f23-34d8877d035f 00:32:48.339 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:48.339 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ff62b19-5cf4-4cc4-8f23-34d8877d035f 00:32:48.339 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:48.339 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:48.339 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:48.339 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:48.339 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:48.339 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:48.339 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:48.339 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:48.339 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ff62b19-5cf4-4cc4-8f23-34d8877d035f 00:32:48.599 request: 00:32:48.599 { 00:32:48.599 "uuid": "7ff62b19-5cf4-4cc4-8f23-34d8877d035f", 00:32:48.599 "method": 
"bdev_lvol_get_lvstores", 00:32:48.599 "req_id": 1 00:32:48.599 } 00:32:48.599 Got JSON-RPC error response 00:32:48.599 response: 00:32:48.599 { 00:32:48.599 "code": -19, 00:32:48.599 "message": "No such device" 00:32:48.599 } 00:32:48.599 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:48.599 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:48.599 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:48.599 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:48.599 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:48.859 aio_bdev 00:32:48.859 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 43959606-b65f-4630-8a5a-532f341965ee 00:32:48.859 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=43959606-b65f-4630-8a5a-532f341965ee 00:32:48.859 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:48.859 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:48.859 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:48.859 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:48.859 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:49.118 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 43959606-b65f-4630-8a5a-532f341965ee -t 2000 00:32:49.118 [ 00:32:49.118 { 00:32:49.118 "name": "43959606-b65f-4630-8a5a-532f341965ee", 00:32:49.118 "aliases": [ 00:32:49.118 "lvs/lvol" 00:32:49.118 ], 00:32:49.118 "product_name": "Logical Volume", 00:32:49.118 "block_size": 4096, 00:32:49.118 "num_blocks": 38912, 00:32:49.118 "uuid": "43959606-b65f-4630-8a5a-532f341965ee", 00:32:49.118 "assigned_rate_limits": { 00:32:49.118 "rw_ios_per_sec": 0, 00:32:49.118 "rw_mbytes_per_sec": 0, 00:32:49.118 "r_mbytes_per_sec": 0, 00:32:49.118 "w_mbytes_per_sec": 0 00:32:49.118 }, 00:32:49.118 "claimed": false, 00:32:49.118 "zoned": false, 00:32:49.118 "supported_io_types": { 00:32:49.118 "read": true, 00:32:49.118 "write": true, 00:32:49.118 "unmap": true, 00:32:49.118 "flush": false, 00:32:49.118 "reset": true, 00:32:49.118 "nvme_admin": false, 00:32:49.118 "nvme_io": false, 00:32:49.118 "nvme_io_md": false, 00:32:49.118 "write_zeroes": true, 00:32:49.118 "zcopy": false, 00:32:49.118 "get_zone_info": false, 00:32:49.118 "zone_management": false, 00:32:49.118 "zone_append": false, 00:32:49.118 "compare": false, 00:32:49.118 "compare_and_write": false, 00:32:49.118 "abort": false, 00:32:49.118 "seek_hole": true, 00:32:49.118 "seek_data": true, 00:32:49.118 "copy": false, 00:32:49.118 "nvme_iov_md": false 00:32:49.118 }, 00:32:49.118 "driver_specific": { 00:32:49.118 "lvol": { 00:32:49.118 "lvol_store_uuid": "7ff62b19-5cf4-4cc4-8f23-34d8877d035f", 00:32:49.118 "base_bdev": "aio_bdev", 00:32:49.118 
"thin_provision": false, 00:32:49.118 "num_allocated_clusters": 38, 00:32:49.118 "snapshot": false, 00:32:49.118 "clone": false, 00:32:49.118 "esnap_clone": false 00:32:49.118 } 00:32:49.118 } 00:32:49.118 } 00:32:49.118 ] 00:32:49.118 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:49.118 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:49.118 11:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ff62b19-5cf4-4cc4-8f23-34d8877d035f 00:32:49.377 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:49.377 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ff62b19-5cf4-4cc4-8f23-34d8877d035f 00:32:49.377 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:49.635 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:49.635 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 43959606-b65f-4630-8a5a-532f341965ee 00:32:49.895 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7ff62b19-5cf4-4cc4-8f23-34d8877d035f 
00:32:49.895 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:50.155 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:50.155 00:32:50.155 real 0m15.615s 00:32:50.155 user 0m15.096s 00:32:50.155 sys 0m1.489s 00:32:50.155 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.155 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:50.155 ************************************ 00:32:50.155 END TEST lvs_grow_clean 00:32:50.155 ************************************ 00:32:50.155 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:50.155 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:50.155 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.155 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:50.415 ************************************ 00:32:50.415 START TEST lvs_grow_dirty 00:32:50.415 ************************************ 00:32:50.415 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:50.415 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:50.415 11:00:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:50.415 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:50.415 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:50.415 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:50.415 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:50.415 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:50.415 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:50.415 11:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:50.415 11:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:50.674 11:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:50.674 11:00:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=829d6eff-a0ae-477a-adbc-f30bd1a1c187 00:32:50.674 11:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 829d6eff-a0ae-477a-adbc-f30bd1a1c187 00:32:50.675 11:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:50.934 11:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:50.934 11:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:50.934 11:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 829d6eff-a0ae-477a-adbc-f30bd1a1c187 lvol 150 00:32:51.192 11:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8505d759-e5e5-4be9-a550-355d40f02725 00:32:51.192 11:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:51.192 11:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:51.192 [2024-11-19 11:00:40.973026] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:51.192 [2024-11-19 
11:00:40.973161] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:51.192 true 00:32:51.451 11:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 829d6eff-a0ae-477a-adbc-f30bd1a1c187 00:32:51.451 11:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:51.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:51.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:51.710 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8505d759-e5e5-4be9-a550-355d40f02725 00:32:51.968 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:51.968 [2024-11-19 11:00:41.733512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.968 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:52.227 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4128520 00:32:52.227 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:52.227 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:52.228 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4128520 /var/tmp/bdevperf.sock 00:32:52.228 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4128520 ']' 00:32:52.228 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:52.228 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:52.228 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:52.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:52.228 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:52.228 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:52.228 [2024-11-19 11:00:41.966737] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:32:52.228 [2024-11-19 11:00:41.966786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4128520 ] 00:32:52.485 [2024-11-19 11:00:42.042377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.485 [2024-11-19 11:00:42.084644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.485 11:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:52.486 11:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:52.486 11:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:53.052 Nvme0n1 00:32:53.052 11:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:53.052 [ 00:32:53.052 { 00:32:53.052 "name": "Nvme0n1", 00:32:53.052 "aliases": [ 00:32:53.052 "8505d759-e5e5-4be9-a550-355d40f02725" 00:32:53.052 ], 00:32:53.052 "product_name": "NVMe disk", 00:32:53.052 "block_size": 4096, 00:32:53.052 "num_blocks": 38912, 00:32:53.052 "uuid": "8505d759-e5e5-4be9-a550-355d40f02725", 00:32:53.052 "numa_id": 1, 00:32:53.052 "assigned_rate_limits": { 00:32:53.052 "rw_ios_per_sec": 0, 00:32:53.052 "rw_mbytes_per_sec": 0, 00:32:53.052 "r_mbytes_per_sec": 0, 00:32:53.052 "w_mbytes_per_sec": 0 00:32:53.052 }, 00:32:53.052 "claimed": false, 00:32:53.052 "zoned": false, 
00:32:53.052 "supported_io_types": { 00:32:53.052 "read": true, 00:32:53.052 "write": true, 00:32:53.052 "unmap": true, 00:32:53.052 "flush": true, 00:32:53.052 "reset": true, 00:32:53.052 "nvme_admin": true, 00:32:53.052 "nvme_io": true, 00:32:53.052 "nvme_io_md": false, 00:32:53.052 "write_zeroes": true, 00:32:53.052 "zcopy": false, 00:32:53.052 "get_zone_info": false, 00:32:53.052 "zone_management": false, 00:32:53.052 "zone_append": false, 00:32:53.052 "compare": true, 00:32:53.052 "compare_and_write": true, 00:32:53.052 "abort": true, 00:32:53.052 "seek_hole": false, 00:32:53.052 "seek_data": false, 00:32:53.052 "copy": true, 00:32:53.052 "nvme_iov_md": false 00:32:53.052 }, 00:32:53.052 "memory_domains": [ 00:32:53.052 { 00:32:53.052 "dma_device_id": "system", 00:32:53.052 "dma_device_type": 1 00:32:53.052 } 00:32:53.052 ], 00:32:53.052 "driver_specific": { 00:32:53.052 "nvme": [ 00:32:53.052 { 00:32:53.052 "trid": { 00:32:53.052 "trtype": "TCP", 00:32:53.052 "adrfam": "IPv4", 00:32:53.052 "traddr": "10.0.0.2", 00:32:53.052 "trsvcid": "4420", 00:32:53.052 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:53.052 }, 00:32:53.052 "ctrlr_data": { 00:32:53.052 "cntlid": 1, 00:32:53.052 "vendor_id": "0x8086", 00:32:53.052 "model_number": "SPDK bdev Controller", 00:32:53.052 "serial_number": "SPDK0", 00:32:53.052 "firmware_revision": "25.01", 00:32:53.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:53.052 "oacs": { 00:32:53.052 "security": 0, 00:32:53.052 "format": 0, 00:32:53.052 "firmware": 0, 00:32:53.052 "ns_manage": 0 00:32:53.052 }, 00:32:53.052 "multi_ctrlr": true, 00:32:53.052 "ana_reporting": false 00:32:53.052 }, 00:32:53.052 "vs": { 00:32:53.052 "nvme_version": "1.3" 00:32:53.052 }, 00:32:53.052 "ns_data": { 00:32:53.052 "id": 1, 00:32:53.052 "can_share": true 00:32:53.052 } 00:32:53.052 } 00:32:53.052 ], 00:32:53.052 "mp_policy": "active_passive" 00:32:53.052 } 00:32:53.052 } 00:32:53.052 ] 00:32:53.052 11:00:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4128739 00:32:53.052 11:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:53.052 11:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:53.311 Running I/O for 10 seconds... 00:32:54.245 Latency(us) 00:32:54.245 [2024-11-19T10:00:44.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:54.245 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:32:54.245 [2024-11-19T10:00:44.037Z] =================================================================================================================== 00:32:54.245 [2024-11-19T10:00:44.037Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:32:54.245 00:32:55.179 11:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 829d6eff-a0ae-477a-adbc-f30bd1a1c187 00:32:55.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:55.179 Nvme0n1 : 2.00 23050.50 90.04 0.00 0.00 0.00 0.00 0.00 00:32:55.179 [2024-11-19T10:00:44.971Z] =================================================================================================================== 00:32:55.179 [2024-11-19T10:00:44.971Z] Total : 23050.50 90.04 0.00 0.00 0.00 0.00 0.00 00:32:55.179 00:32:55.437 true 00:32:55.437 11:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 829d6eff-a0ae-477a-adbc-f30bd1a1c187 00:32:55.437 11:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:55.437 11:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:55.437 11:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:55.437 11:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4128739 00:32:56.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:56.373 Nvme0n1 : 3.00 23156.33 90.45 0.00 0.00 0.00 0.00 0.00 00:32:56.373 [2024-11-19T10:00:46.165Z] =================================================================================================================== 00:32:56.373 [2024-11-19T10:00:46.165Z] Total : 23156.33 90.45 0.00 0.00 0.00 0.00 0.00 00:32:56.373 00:32:57.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:57.308 Nvme0n1 : 4.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:32:57.308 [2024-11-19T10:00:47.100Z] =================================================================================================================== 00:32:57.308 [2024-11-19T10:00:47.100Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:32:57.308 00:32:58.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:58.243 Nvme0n1 : 5.00 23317.20 91.08 0.00 0.00 0.00 0.00 0.00 00:32:58.243 [2024-11-19T10:00:48.035Z] =================================================================================================================== 00:32:58.243 [2024-11-19T10:00:48.035Z] Total : 23317.20 91.08 0.00 0.00 0.00 0.00 0.00 00:32:58.243 00:32:59.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:32:59.177 Nvme0n1 : 6.00 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:32:59.177 [2024-11-19T10:00:48.969Z] =================================================================================================================== 00:32:59.177 [2024-11-19T10:00:48.969Z] Total : 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:32:59.177 00:33:00.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:00.112 Nvme0n1 : 7.00 23404.29 91.42 0.00 0.00 0.00 0.00 0.00 00:33:00.112 [2024-11-19T10:00:49.904Z] =================================================================================================================== 00:33:00.112 [2024-11-19T10:00:49.904Z] Total : 23404.29 91.42 0.00 0.00 0.00 0.00 0.00 00:33:00.112 00:33:01.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:01.489 Nvme0n1 : 8.00 23372.25 91.30 0.00 0.00 0.00 0.00 0.00 00:33:01.489 [2024-11-19T10:00:51.281Z] =================================================================================================================== 00:33:01.489 [2024-11-19T10:00:51.281Z] Total : 23372.25 91.30 0.00 0.00 0.00 0.00 0.00 00:33:01.489 00:33:02.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:02.424 Nvme0n1 : 9.00 23400.00 91.41 0.00 0.00 0.00 0.00 0.00 00:33:02.424 [2024-11-19T10:00:52.216Z] =================================================================================================================== 00:33:02.424 [2024-11-19T10:00:52.216Z] Total : 23400.00 91.41 0.00 0.00 0.00 0.00 0.00 00:33:02.424 00:33:03.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:03.360 Nvme0n1 : 10.00 23422.20 91.49 0.00 0.00 0.00 0.00 0.00 00:33:03.360 [2024-11-19T10:00:53.152Z] =================================================================================================================== 00:33:03.360 [2024-11-19T10:00:53.152Z] Total : 23422.20 91.49 0.00 0.00 0.00 0.00 0.00 00:33:03.360 00:33:03.360 
00:33:03.360 Latency(us) 00:33:03.360 [2024-11-19T10:00:53.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:03.360 Nvme0n1 : 10.01 23420.84 91.49 0.00 0.00 5462.41 2715.06 27088.21 00:33:03.360 [2024-11-19T10:00:53.152Z] =================================================================================================================== 00:33:03.360 [2024-11-19T10:00:53.152Z] Total : 23420.84 91.49 0.00 0.00 5462.41 2715.06 27088.21 00:33:03.360 { 00:33:03.360 "results": [ 00:33:03.360 { 00:33:03.360 "job": "Nvme0n1", 00:33:03.360 "core_mask": "0x2", 00:33:03.360 "workload": "randwrite", 00:33:03.360 "status": "finished", 00:33:03.360 "queue_depth": 128, 00:33:03.360 "io_size": 4096, 00:33:03.360 "runtime": 10.006045, 00:33:03.360 "iops": 23420.842100949976, 00:33:03.360 "mibps": 91.48766445683584, 00:33:03.360 "io_failed": 0, 00:33:03.360 "io_timeout": 0, 00:33:03.360 "avg_latency_us": 5462.412882578968, 00:33:03.360 "min_latency_us": 2715.062857142857, 00:33:03.360 "max_latency_us": 27088.213333333333 00:33:03.360 } 00:33:03.360 ], 00:33:03.360 "core_count": 1 00:33:03.360 } 00:33:03.360 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4128520 00:33:03.360 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 4128520 ']' 00:33:03.360 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 4128520 00:33:03.360 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:33:03.360 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:03.360 11:00:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4128520 00:33:03.360 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:03.360 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:03.360 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4128520' 00:33:03.360 killing process with pid 4128520 00:33:03.360 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 4128520 00:33:03.360 Received shutdown signal, test time was about 10.000000 seconds 00:33:03.360 00:33:03.360 Latency(us) 00:33:03.360 [2024-11-19T10:00:53.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.360 [2024-11-19T10:00:53.152Z] =================================================================================================================== 00:33:03.360 [2024-11-19T10:00:53.152Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:03.360 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 4128520 00:33:03.360 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:03.619 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:03.879 11:00:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 829d6eff-a0ae-477a-adbc-f30bd1a1c187 00:33:03.879 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4125431 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4125431 00:33:04.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4125431 Killed "${NVMF_APP[@]}" "$@" 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=4130499 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 4130499 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4130499 ']' 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:04.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:04.138 11:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:04.138 [2024-11-19 11:00:53.796768] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:04.138 [2024-11-19 11:00:53.797671] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:33:04.138 [2024-11-19 11:00:53.797705] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:04.138 [2024-11-19 11:00:53.877653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.138 [2024-11-19 11:00:53.918158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:04.138 [2024-11-19 11:00:53.918193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:04.138 [2024-11-19 11:00:53.918200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:04.138 [2024-11-19 11:00:53.918212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:04.138 [2024-11-19 11:00:53.918232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:04.138 [2024-11-19 11:00:53.918761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.397 [2024-11-19 11:00:53.985787] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:04.397 [2024-11-19 11:00:53.986016] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:04.397 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:04.397 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:04.397 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:04.397 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:04.397 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:04.397 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:04.397 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:04.656 [2024-11-19 11:00:54.220181] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:04.656 [2024-11-19 11:00:54.220391] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:04.656 [2024-11-19 11:00:54.220472] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:04.656 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:04.656 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8505d759-e5e5-4be9-a550-355d40f02725 00:33:04.656 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=8505d759-e5e5-4be9-a550-355d40f02725 00:33:04.656 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:04.656 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:04.656 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:04.656 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:04.656 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:04.915 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8505d759-e5e5-4be9-a550-355d40f02725 -t 2000 00:33:04.915 [ 00:33:04.915 { 00:33:04.915 "name": "8505d759-e5e5-4be9-a550-355d40f02725", 00:33:04.915 "aliases": [ 00:33:04.915 "lvs/lvol" 00:33:04.915 ], 00:33:04.915 "product_name": "Logical Volume", 00:33:04.915 "block_size": 4096, 00:33:04.915 "num_blocks": 38912, 00:33:04.915 "uuid": "8505d759-e5e5-4be9-a550-355d40f02725", 00:33:04.915 "assigned_rate_limits": { 00:33:04.915 "rw_ios_per_sec": 0, 00:33:04.915 "rw_mbytes_per_sec": 0, 00:33:04.915 "r_mbytes_per_sec": 0, 00:33:04.915 "w_mbytes_per_sec": 0 00:33:04.915 }, 00:33:04.915 "claimed": false, 00:33:04.915 "zoned": false, 00:33:04.915 "supported_io_types": { 00:33:04.915 "read": true, 00:33:04.915 "write": true, 00:33:04.915 "unmap": true, 00:33:04.915 "flush": false, 00:33:04.915 "reset": true, 00:33:04.915 "nvme_admin": false, 00:33:04.915 "nvme_io": false, 00:33:04.915 "nvme_io_md": false, 00:33:04.915 "write_zeroes": true, 
00:33:04.915 "zcopy": false, 00:33:04.915 "get_zone_info": false, 00:33:04.915 "zone_management": false, 00:33:04.915 "zone_append": false, 00:33:04.915 "compare": false, 00:33:04.915 "compare_and_write": false, 00:33:04.915 "abort": false, 00:33:04.915 "seek_hole": true, 00:33:04.915 "seek_data": true, 00:33:04.915 "copy": false, 00:33:04.915 "nvme_iov_md": false 00:33:04.915 }, 00:33:04.915 "driver_specific": { 00:33:04.915 "lvol": { 00:33:04.915 "lvol_store_uuid": "829d6eff-a0ae-477a-adbc-f30bd1a1c187", 00:33:04.915 "base_bdev": "aio_bdev", 00:33:04.915 "thin_provision": false, 00:33:04.915 "num_allocated_clusters": 38, 00:33:04.915 "snapshot": false, 00:33:04.915 "clone": false, 00:33:04.916 "esnap_clone": false 00:33:04.916 } 00:33:04.916 } 00:33:04.916 } 00:33:04.916 ] 00:33:04.916 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:04.916 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 829d6eff-a0ae-477a-adbc-f30bd1a1c187 00:33:04.916 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:05.174 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:05.174 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 829d6eff-a0ae-477a-adbc-f30bd1a1c187 00:33:05.174 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:05.433 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:05.433 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:05.433 [2024-11-19 11:00:55.179244] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:05.693 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 829d6eff-a0ae-477a-adbc-f30bd1a1c187 00:33:05.693 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:05.693 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 829d6eff-a0ae-477a-adbc-f30bd1a1c187 00:33:05.693 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:05.693 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:05.693 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:05.693 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:05.693 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:05.693 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:05.693 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:05.693 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:05.693 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 829d6eff-a0ae-477a-adbc-f30bd1a1c187 00:33:05.693 request: 00:33:05.693 { 00:33:05.693 "uuid": "829d6eff-a0ae-477a-adbc-f30bd1a1c187", 00:33:05.693 "method": "bdev_lvol_get_lvstores", 00:33:05.693 "req_id": 1 00:33:05.693 } 00:33:05.693 Got JSON-RPC error response 00:33:05.693 response: 00:33:05.693 { 00:33:05.693 "code": -19, 00:33:05.693 "message": "No such device" 00:33:05.693 } 00:33:05.693 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:05.693 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:05.693 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:05.693 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:05.693 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:05.952 aio_bdev 00:33:05.952 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8505d759-e5e5-4be9-a550-355d40f02725 00:33:05.952 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8505d759-e5e5-4be9-a550-355d40f02725 00:33:05.952 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:05.952 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:05.952 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:05.952 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:05.952 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:06.211 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8505d759-e5e5-4be9-a550-355d40f02725 -t 2000 00:33:06.211 [ 00:33:06.211 { 00:33:06.211 "name": "8505d759-e5e5-4be9-a550-355d40f02725", 00:33:06.211 "aliases": [ 00:33:06.211 "lvs/lvol" 00:33:06.211 ], 00:33:06.211 "product_name": "Logical Volume", 00:33:06.211 "block_size": 4096, 00:33:06.211 "num_blocks": 38912, 00:33:06.211 "uuid": "8505d759-e5e5-4be9-a550-355d40f02725", 00:33:06.211 "assigned_rate_limits": { 00:33:06.211 "rw_ios_per_sec": 0, 00:33:06.211 "rw_mbytes_per_sec": 0, 00:33:06.211 
"r_mbytes_per_sec": 0, 00:33:06.211 "w_mbytes_per_sec": 0 00:33:06.211 }, 00:33:06.211 "claimed": false, 00:33:06.211 "zoned": false, 00:33:06.211 "supported_io_types": { 00:33:06.211 "read": true, 00:33:06.211 "write": true, 00:33:06.211 "unmap": true, 00:33:06.211 "flush": false, 00:33:06.211 "reset": true, 00:33:06.211 "nvme_admin": false, 00:33:06.211 "nvme_io": false, 00:33:06.211 "nvme_io_md": false, 00:33:06.211 "write_zeroes": true, 00:33:06.211 "zcopy": false, 00:33:06.211 "get_zone_info": false, 00:33:06.211 "zone_management": false, 00:33:06.211 "zone_append": false, 00:33:06.211 "compare": false, 00:33:06.211 "compare_and_write": false, 00:33:06.211 "abort": false, 00:33:06.211 "seek_hole": true, 00:33:06.211 "seek_data": true, 00:33:06.211 "copy": false, 00:33:06.211 "nvme_iov_md": false 00:33:06.211 }, 00:33:06.211 "driver_specific": { 00:33:06.211 "lvol": { 00:33:06.211 "lvol_store_uuid": "829d6eff-a0ae-477a-adbc-f30bd1a1c187", 00:33:06.211 "base_bdev": "aio_bdev", 00:33:06.211 "thin_provision": false, 00:33:06.211 "num_allocated_clusters": 38, 00:33:06.211 "snapshot": false, 00:33:06.211 "clone": false, 00:33:06.211 "esnap_clone": false 00:33:06.211 } 00:33:06.211 } 00:33:06.211 } 00:33:06.211 ] 00:33:06.211 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:06.211 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 829d6eff-a0ae-477a-adbc-f30bd1a1c187 00:33:06.211 11:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:06.469 11:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:06.469 11:00:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 829d6eff-a0ae-477a-adbc-f30bd1a1c187 00:33:06.469 11:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:06.727 11:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:06.727 11:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8505d759-e5e5-4be9-a550-355d40f02725 00:33:06.986 11:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 829d6eff-a0ae-477a-adbc-f30bd1a1c187 00:33:07.245 11:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:07.245 11:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:07.245 00:33:07.245 real 0m17.062s 00:33:07.245 user 0m34.377s 00:33:07.245 sys 0m3.925s 00:33:07.245 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:07.245 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:07.245 ************************************ 00:33:07.245 END TEST lvs_grow_dirty 00:33:07.245 ************************************ 
00:33:07.509 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:07.509 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:33:07.509 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:33:07.509 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:33:07.509 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:07.509 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:33:07.509 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:33:07.509 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:07.510 nvmf_trace.0 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:07.510 11:00:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:07.510 rmmod nvme_tcp 00:33:07.510 rmmod nvme_fabrics 00:33:07.510 rmmod nvme_keyring 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 4130499 ']' 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 4130499 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 4130499 ']' 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 4130499 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4130499 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:07.510 
11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4130499' 00:33:07.510 killing process with pid 4130499 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 4130499 00:33:07.510 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 4130499 00:33:07.794 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:07.794 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:07.794 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:07.794 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:07.794 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:33:07.794 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:07.794 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:33:07.794 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:07.794 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:07.794 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.794 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.794 11:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.733 
11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:09.733 00:33:09.733 real 0m42.468s 00:33:09.733 user 0m52.140s 00:33:09.733 sys 0m10.310s 00:33:09.733 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:09.733 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:09.733 ************************************ 00:33:09.733 END TEST nvmf_lvs_grow 00:33:09.733 ************************************ 00:33:09.733 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:09.733 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:09.733 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:09.733 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:09.993 ************************************ 00:33:09.993 START TEST nvmf_bdev_io_wait 00:33:09.993 ************************************ 00:33:09.993 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:09.993 * Looking for test storage... 
00:33:09.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:09.993 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:09.993 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:09.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.994 --rc genhtml_branch_coverage=1 00:33:09.994 --rc genhtml_function_coverage=1 00:33:09.994 --rc genhtml_legend=1 00:33:09.994 --rc geninfo_all_blocks=1 00:33:09.994 --rc geninfo_unexecuted_blocks=1 00:33:09.994 00:33:09.994 ' 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:09.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.994 --rc genhtml_branch_coverage=1 00:33:09.994 --rc genhtml_function_coverage=1 00:33:09.994 --rc genhtml_legend=1 00:33:09.994 --rc geninfo_all_blocks=1 00:33:09.994 --rc geninfo_unexecuted_blocks=1 00:33:09.994 00:33:09.994 ' 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:09.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.994 --rc genhtml_branch_coverage=1 00:33:09.994 --rc genhtml_function_coverage=1 00:33:09.994 --rc genhtml_legend=1 00:33:09.994 --rc geninfo_all_blocks=1 00:33:09.994 --rc geninfo_unexecuted_blocks=1 00:33:09.994 00:33:09.994 ' 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:09.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.994 --rc genhtml_branch_coverage=1 00:33:09.994 --rc genhtml_function_coverage=1 
00:33:09.994 --rc genhtml_legend=1 00:33:09.994 --rc geninfo_all_blocks=1 00:33:09.994 --rc geninfo_unexecuted_blocks=1 00:33:09.994 00:33:09.994 ' 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:09.994 11:00:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.994 11:00:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.994 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:09.995 11:00:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:09.995 11:00:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:33:09.995 11:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:16.570 11:01:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:16.570 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:16.571 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:16.571 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:16.571 Found net devices under 0000:86:00.0: cvl_0_0 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:16.571 Found net devices under 0000:86:00.1: cvl_0_1 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:16.571 11:01:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:16.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:16.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:33:16.571 00:33:16.571 --- 10.0.0.2 ping statistics --- 00:33:16.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.571 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:16.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:16.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:33:16.571 00:33:16.571 --- 10.0.0.1 ping statistics --- 00:33:16.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.571 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:16.571 11:01:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=4134618 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 4134618 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 4134618 ']' 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:16.571 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:16.572 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:16.572 [2024-11-19 11:01:05.744839] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:16.572 [2024-11-19 11:01:05.745754] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:33:16.572 [2024-11-19 11:01:05.745788] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.572 [2024-11-19 11:01:05.810586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:16.572 [2024-11-19 11:01:05.856841] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.572 [2024-11-19 11:01:05.856876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.572 [2024-11-19 11:01:05.856883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.572 [2024-11-19 11:01:05.856890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.572 [2024-11-19 11:01:05.856895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:16.572 [2024-11-19 11:01:05.861219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.572 [2024-11-19 11:01:05.861266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:16.572 [2024-11-19 11:01:05.861374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.572 [2024-11-19 11:01:05.861375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:16.572 [2024-11-19 11:01:05.861624] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:16.572 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:16.572 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:16.572 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:16.572 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:16.572 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:16.572 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:16.572 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:16.572 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.572 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:16.572 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.572 11:01:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:16.572 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.572 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:16.572 [2024-11-19 11:01:06.005584] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:16.572 [2024-11-19 11:01:06.005974] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:16.572 [2024-11-19 11:01:06.006100] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:16.572 [2024-11-19 11:01:06.006269] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:16.572 [2024-11-19 11:01:06.018037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:16.572 Malloc0 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.572 11:01:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:16.572 [2024-11-19 11:01:06.090315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4134643 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4134645 00:33:16.572 11:01:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:16.572 { 00:33:16.572 "params": { 00:33:16.572 "name": "Nvme$subsystem", 00:33:16.572 "trtype": "$TEST_TRANSPORT", 00:33:16.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:16.572 "adrfam": "ipv4", 00:33:16.572 "trsvcid": "$NVMF_PORT", 00:33:16.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:16.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:16.572 "hdgst": ${hdgst:-false}, 00:33:16.572 "ddgst": ${ddgst:-false} 00:33:16.572 }, 00:33:16.572 "method": "bdev_nvme_attach_controller" 00:33:16.572 } 00:33:16.572 EOF 00:33:16.572 )") 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4134647 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:16.572 11:01:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:16.572 { 00:33:16.572 "params": { 00:33:16.572 "name": "Nvme$subsystem", 00:33:16.572 "trtype": "$TEST_TRANSPORT", 00:33:16.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:16.572 "adrfam": "ipv4", 00:33:16.572 "trsvcid": "$NVMF_PORT", 00:33:16.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:16.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:16.572 "hdgst": ${hdgst:-false}, 00:33:16.572 "ddgst": ${ddgst:-false} 00:33:16.572 }, 00:33:16.572 "method": "bdev_nvme_attach_controller" 00:33:16.572 } 00:33:16.572 EOF 00:33:16.572 )") 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4134650 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:16.572 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:16.572 { 00:33:16.573 "params": { 00:33:16.573 "name": "Nvme$subsystem", 00:33:16.573 "trtype": "$TEST_TRANSPORT", 00:33:16.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:16.573 "adrfam": "ipv4", 00:33:16.573 "trsvcid": "$NVMF_PORT", 00:33:16.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:16.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:16.573 "hdgst": ${hdgst:-false}, 00:33:16.573 "ddgst": ${ddgst:-false} 00:33:16.573 }, 00:33:16.573 "method": "bdev_nvme_attach_controller" 00:33:16.573 } 00:33:16.573 EOF 00:33:16.573 )") 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:16.573 { 00:33:16.573 "params": { 00:33:16.573 "name": "Nvme$subsystem", 00:33:16.573 "trtype": "$TEST_TRANSPORT", 00:33:16.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:16.573 "adrfam": "ipv4", 00:33:16.573 "trsvcid": "$NVMF_PORT", 00:33:16.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:16.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:16.573 "hdgst": ${hdgst:-false}, 00:33:16.573 "ddgst": ${ddgst:-false} 00:33:16.573 }, 00:33:16.573 "method": 
"bdev_nvme_attach_controller" 00:33:16.573 } 00:33:16.573 EOF 00:33:16.573 )") 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4134643 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:16.573 "params": { 00:33:16.573 "name": "Nvme1", 00:33:16.573 "trtype": "tcp", 00:33:16.573 "traddr": "10.0.0.2", 00:33:16.573 "adrfam": "ipv4", 00:33:16.573 "trsvcid": "4420", 00:33:16.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:16.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:16.573 "hdgst": false, 00:33:16.573 "ddgst": false 00:33:16.573 }, 00:33:16.573 "method": "bdev_nvme_attach_controller" 00:33:16.573 }' 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:16.573 "params": { 00:33:16.573 "name": "Nvme1", 00:33:16.573 "trtype": "tcp", 00:33:16.573 "traddr": "10.0.0.2", 00:33:16.573 "adrfam": "ipv4", 00:33:16.573 "trsvcid": "4420", 00:33:16.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:16.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:16.573 "hdgst": false, 00:33:16.573 "ddgst": false 00:33:16.573 }, 00:33:16.573 "method": "bdev_nvme_attach_controller" 00:33:16.573 }' 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:16.573 "params": { 00:33:16.573 "name": "Nvme1", 00:33:16.573 "trtype": "tcp", 00:33:16.573 "traddr": "10.0.0.2", 00:33:16.573 "adrfam": "ipv4", 00:33:16.573 "trsvcid": "4420", 00:33:16.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:16.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:16.573 "hdgst": false, 00:33:16.573 "ddgst": false 00:33:16.573 }, 00:33:16.573 "method": "bdev_nvme_attach_controller" 00:33:16.573 }' 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:16.573 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:16.573 "params": { 00:33:16.573 "name": "Nvme1", 00:33:16.573 "trtype": "tcp", 00:33:16.573 "traddr": "10.0.0.2", 00:33:16.573 "adrfam": "ipv4", 00:33:16.573 "trsvcid": "4420", 00:33:16.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:16.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:16.573 "hdgst": false, 00:33:16.573 "ddgst": false 00:33:16.573 }, 00:33:16.573 "method": "bdev_nvme_attach_controller" 
00:33:16.573 }' 00:33:16.573 [2024-11-19 11:01:06.140851] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:33:16.573 [2024-11-19 11:01:06.140902] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:16.573 [2024-11-19 11:01:06.142923] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:33:16.573 [2024-11-19 11:01:06.142924] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:33:16.573 [2024-11-19 11:01:06.142971] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:16.573 [2024-11-19 11:01:06.142972] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:16.573 [2024-11-19 11:01:06.143736] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:33:16.573 [2024-11-19 11:01:06.143775] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:16.573 [2024-11-19 11:01:06.327786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.831 [2024-11-19 11:01:06.370163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:16.831 [2024-11-19 11:01:06.416354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.831 [2024-11-19 11:01:06.462924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.831 [2024-11-19 11:01:06.470316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:16.831 [2024-11-19 11:01:06.503165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:16.831 [2024-11-19 11:01:06.522142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.831 [2024-11-19 11:01:06.564443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:17.089 Running I/O for 1 seconds... 00:33:17.089 Running I/O for 1 seconds... 00:33:17.089 Running I/O for 1 seconds... 00:33:17.089 Running I/O for 1 seconds... 
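The per-job summaries that follow report both IOPS and MiB/s for 4 KiB I/O; the two columns are related by MiB/s = IOPS × IO size / 2^20. A quick awk recomputation for the flush job's figures (illustrative only, values copied from the table below):

```shell
# Cross-check the bdevperf summary arithmetic: throughput in MiB/s is
# IOPS times the I/O size in bytes, divided by 2^20 bytes per MiB.
# 252501.42 IOPS at 4096 B per I/O should land on the table's 986.33 MiB/s.
awk 'BEGIN {
    iops    = 252501.42   # flush job IOPS from the summary table
    io_size = 4096        # bytes per I/O (IO size: 4096 in the job line)
    mibs    = iops * io_size / (1024 * 1024)
    printf "%.2f MiB/s\n", mibs
}'
```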
00:33:18.024 252880.00 IOPS, 987.81 MiB/s 00:33:18.024 Latency(us) 00:33:18.024 [2024-11-19T10:01:07.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.024 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:18.024 Nvme1n1 : 1.00 252501.42 986.33 0.00 0.00 504.22 223.33 1490.16 00:33:18.024 [2024-11-19T10:01:07.816Z] =================================================================================================================== 00:33:18.024 [2024-11-19T10:01:07.816Z] Total : 252501.42 986.33 0.00 0.00 504.22 223.33 1490.16 00:33:18.024 8438.00 IOPS, 32.96 MiB/s 00:33:18.024 Latency(us) 00:33:18.024 [2024-11-19T10:01:07.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.024 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:18.024 Nvme1n1 : 1.02 8473.22 33.10 0.00 0.00 15046.02 3401.63 27462.70 00:33:18.024 [2024-11-19T10:01:07.816Z] =================================================================================================================== 00:33:18.024 [2024-11-19T10:01:07.816Z] Total : 8473.22 33.10 0.00 0.00 15046.02 3401.63 27462.70 00:33:18.024 11895.00 IOPS, 46.46 MiB/s 00:33:18.024 Latency(us) 00:33:18.024 [2024-11-19T10:01:07.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.024 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:18.024 Nvme1n1 : 1.01 11937.59 46.63 0.00 0.00 10680.01 4556.31 14667.58 00:33:18.024 [2024-11-19T10:01:07.816Z] =================================================================================================================== 00:33:18.024 [2024-11-19T10:01:07.816Z] Total : 11937.59 46.63 0.00 0.00 10680.01 4556.31 14667.58 00:33:18.024 8340.00 IOPS, 32.58 MiB/s 00:33:18.025 Latency(us) 00:33:18.025 [2024-11-19T10:01:07.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.025 Job: Nvme1n1 (Core Mask 
0x10, workload: write, depth: 128, IO size: 4096) 00:33:18.025 Nvme1n1 : 1.01 8466.73 33.07 0.00 0.00 15084.40 3214.38 30957.96 00:33:18.025 [2024-11-19T10:01:07.817Z] =================================================================================================================== 00:33:18.025 [2024-11-19T10:01:07.817Z] Total : 8466.73 33.07 0.00 0.00 15084.40 3214.38 30957.96 00:33:18.283 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4134645 00:33:18.283 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4134647 00:33:18.283 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4134650 00:33:18.283 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:18.283 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.283 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.283 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.283 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:18.283 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:18.283 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:18.283 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:18.283 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:18.283 11:01:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:18.283 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:18.283 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:18.283 rmmod nvme_tcp 00:33:18.283 rmmod nvme_fabrics 00:33:18.283 rmmod nvme_keyring 00:33:18.284 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:18.284 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:18.284 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:18.284 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 4134618 ']' 00:33:18.284 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 4134618 00:33:18.284 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 4134618 ']' 00:33:18.284 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 4134618 00:33:18.284 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:18.284 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:18.284 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4134618 00:33:18.284 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:18.284 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:18.284 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4134618' 00:33:18.284 killing process with pid 4134618 00:33:18.284 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 4134618 00:33:18.284 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 4134618 00:33:18.543 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:18.543 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:18.543 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:18.543 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:18.543 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:33:18.543 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:18.543 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:18.543 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:18.543 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:18.543 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:18.543 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:18.543 
11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:20.446 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:20.704 00:33:20.704 real 0m10.704s 00:33:20.704 user 0m14.556s 00:33:20.704 sys 0m6.330s 00:33:20.704 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:20.705 ************************************ 00:33:20.705 END TEST nvmf_bdev_io_wait 00:33:20.705 ************************************ 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:20.705 ************************************ 00:33:20.705 START TEST nvmf_queue_depth 00:33:20.705 ************************************ 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:20.705 * Looking for test storage... 
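The `cmp_versions` walk traced below (scripts/common.sh, checking `lt 1.15 2` against the installed lcov) splits each version string on dots/dashes into arrays and compares them component by component, padding the shorter one with zeros. A minimal standalone bash re-sketch of that logic — the `version_lt` helper is hypothetical, not the real `lt` from scripts/common.sh:

```shell
# Hypothetical sketch of a dotted-version "less than" test in the style
# of scripts/common.sh cmp_versions: split on . and -, then compare
# each numeric component; missing components count as 0. Bash-specific.
version_lt() {
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        ((a < b)) && return 0   # strictly less at this component
        ((a > b)) && return 1   # strictly greater, so not less-than
    done
    return 1                    # all components equal
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Note the components are compared numerically, matching the trace (`[[ 1 =~ ^[0-9]+$ ]]` followed by integer comparison), so `1.15` sorts above `1.2`.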
00:33:20.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:20.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.705 --rc genhtml_branch_coverage=1 00:33:20.705 --rc genhtml_function_coverage=1 00:33:20.705 --rc genhtml_legend=1 00:33:20.705 --rc geninfo_all_blocks=1 00:33:20.705 --rc geninfo_unexecuted_blocks=1 00:33:20.705 00:33:20.705 ' 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:20.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.705 --rc genhtml_branch_coverage=1 00:33:20.705 --rc genhtml_function_coverage=1 00:33:20.705 --rc genhtml_legend=1 00:33:20.705 --rc geninfo_all_blocks=1 00:33:20.705 --rc geninfo_unexecuted_blocks=1 00:33:20.705 00:33:20.705 ' 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:20.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.705 --rc genhtml_branch_coverage=1 00:33:20.705 --rc genhtml_function_coverage=1 00:33:20.705 --rc genhtml_legend=1 00:33:20.705 --rc geninfo_all_blocks=1 00:33:20.705 --rc geninfo_unexecuted_blocks=1 00:33:20.705 00:33:20.705 ' 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:20.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.705 --rc genhtml_branch_coverage=1 00:33:20.705 --rc genhtml_function_coverage=1 00:33:20.705 --rc genhtml_legend=1 00:33:20.705 --rc 
geninfo_all_blocks=1 00:33:20.705 --rc geninfo_unexecuted_blocks=1 00:33:20.705 00:33:20.705 ' 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:20.705 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:20.964 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:20.964 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:20.964 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:20.964 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:20.964 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:20.964 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:20.964 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:20.964 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:20.964 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:20.964 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:20.964 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:20.964 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.964 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.964 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.964 11:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:20.964 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.964 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:20.965 11:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:20.965 11:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:20.965 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:27.537 
11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:27.537 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:27.537 11:01:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:27.537 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:27.537 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:27.538 Found net devices under 0000:86:00.0: cvl_0_0 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:27.538 Found net devices under 0000:86:00.1: cvl_0_1 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:27.538 11:01:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:27.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:27.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:33:27.538 00:33:27.538 --- 10.0.0.2 ping statistics --- 00:33:27.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.538 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:27.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:27.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:33:27.538 00:33:27.538 --- 10.0.0.1 ping statistics --- 00:33:27.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.538 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:27.538 11:01:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=4138423 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 4138423 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4138423 ']' 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:27.538 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:27.538 [2024-11-19 11:01:16.499725] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:27.538 [2024-11-19 11:01:16.500685] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:33:27.539 [2024-11-19 11:01:16.500727] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:27.539 [2024-11-19 11:01:16.584297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.539 [2024-11-19 11:01:16.626326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:27.539 [2024-11-19 11:01:16.626363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:27.539 [2024-11-19 11:01:16.626369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:27.539 [2024-11-19 11:01:16.626375] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:27.539 [2024-11-19 11:01:16.626380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:27.539 [2024-11-19 11:01:16.626869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.539 [2024-11-19 11:01:16.693926] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:27.539 [2024-11-19 11:01:16.694144] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:27.798 [2024-11-19 11:01:17.383482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:27.798 Malloc0 00:33:27.798 11:01:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:27.798 [2024-11-19 11:01:17.459691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.798 
11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4138668 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4138668 /var/tmp/bdevperf.sock 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4138668 ']' 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:27.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:27.798 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:27.798 [2024-11-19 11:01:17.511358] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:33:27.798 [2024-11-19 11:01:17.511403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4138668 ] 00:33:27.798 [2024-11-19 11:01:17.586880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.057 [2024-11-19 11:01:17.631913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.057 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:28.057 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:28.057 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:28.057 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.057 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:28.057 NVMe0n1 00:33:28.057 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.057 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:28.315 Running I/O for 10 seconds... 
00:33:30.188 12128.00 IOPS, 47.38 MiB/s [2024-11-19T10:01:21.355Z] 12289.00 IOPS, 48.00 MiB/s [2024-11-19T10:01:22.290Z] 12295.00 IOPS, 48.03 MiB/s [2024-11-19T10:01:23.224Z] 12453.75 IOPS, 48.65 MiB/s [2024-11-19T10:01:24.158Z] 12494.80 IOPS, 48.81 MiB/s [2024-11-19T10:01:25.093Z] 12478.50 IOPS, 48.74 MiB/s [2024-11-19T10:01:26.029Z] 12562.71 IOPS, 49.07 MiB/s [2024-11-19T10:01:26.963Z] 12550.75 IOPS, 49.03 MiB/s [2024-11-19T10:01:28.026Z] 12582.11 IOPS, 49.15 MiB/s [2024-11-19T10:01:28.026Z] 12583.70 IOPS, 49.16 MiB/s 00:33:38.234 Latency(us) 00:33:38.234 [2024-11-19T10:01:28.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.234 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:38.234 Verification LBA range: start 0x0 length 0x4000 00:33:38.234 NVMe0n1 : 10.07 12603.98 49.23 0.00 0.00 80984.47 18599.74 52428.80 00:33:38.234 [2024-11-19T10:01:28.026Z] =================================================================================================================== 00:33:38.234 [2024-11-19T10:01:28.026Z] Total : 12603.98 49.23 0.00 0.00 80984.47 18599.74 52428.80 00:33:38.234 { 00:33:38.234 "results": [ 00:33:38.234 { 00:33:38.234 "job": "NVMe0n1", 00:33:38.234 "core_mask": "0x1", 00:33:38.234 "workload": "verify", 00:33:38.234 "status": "finished", 00:33:38.234 "verify_range": { 00:33:38.234 "start": 0, 00:33:38.234 "length": 16384 00:33:38.234 }, 00:33:38.234 "queue_depth": 1024, 00:33:38.234 "io_size": 4096, 00:33:38.234 "runtime": 10.065155, 00:33:38.234 "iops": 12603.978776283127, 00:33:38.234 "mibps": 49.23429209485597, 00:33:38.234 "io_failed": 0, 00:33:38.234 "io_timeout": 0, 00:33:38.234 "avg_latency_us": 80984.47277036998, 00:33:38.234 "min_latency_us": 18599.74095238095, 00:33:38.234 "max_latency_us": 52428.8 00:33:38.234 } 00:33:38.234 ], 00:33:38.234 "core_count": 1 00:33:38.234 } 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 4138668 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4138668 ']' 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4138668 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4138668 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4138668' 00:33:38.535 killing process with pid 4138668 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4138668 00:33:38.535 Received shutdown signal, test time was about 10.000000 seconds 00:33:38.535 00:33:38.535 Latency(us) 00:33:38.535 [2024-11-19T10:01:28.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.535 [2024-11-19T10:01:28.327Z] =================================================================================================================== 00:33:38.535 [2024-11-19T10:01:28.327Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4138668 00:33:38.535 11:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:38.535 rmmod nvme_tcp 00:33:38.535 rmmod nvme_fabrics 00:33:38.535 rmmod nvme_keyring 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 4138423 ']' 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 4138423 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4138423 ']' 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4138423 00:33:38.535 11:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:38.535 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4138423 00:33:38.794 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:38.794 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:38.794 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4138423' 00:33:38.794 killing process with pid 4138423 00:33:38.794 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4138423 00:33:38.794 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4138423 00:33:38.794 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:38.794 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:38.794 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:38.794 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:38.794 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:38.794 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:38.794 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:33:38.794 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:38.794 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:38.794 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.794 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.794 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.330 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:41.330 00:33:41.330 real 0m20.304s 00:33:41.330 user 0m22.915s 00:33:41.330 sys 0m6.229s 00:33:41.330 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:41.330 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:41.330 ************************************ 00:33:41.330 END TEST nvmf_queue_depth 00:33:41.330 ************************************ 00:33:41.330 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:41.330 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:41.330 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:41.331 ************************************ 00:33:41.331 START 
TEST nvmf_target_multipath 00:33:41.331 ************************************ 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:41.331 * Looking for test storage... 00:33:41.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:41.331 11:01:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:41.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.331 --rc genhtml_branch_coverage=1 00:33:41.331 --rc genhtml_function_coverage=1 00:33:41.331 --rc genhtml_legend=1 00:33:41.331 --rc geninfo_all_blocks=1 00:33:41.331 --rc geninfo_unexecuted_blocks=1 00:33:41.331 00:33:41.331 ' 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:41.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.331 --rc genhtml_branch_coverage=1 00:33:41.331 --rc genhtml_function_coverage=1 00:33:41.331 --rc genhtml_legend=1 00:33:41.331 --rc geninfo_all_blocks=1 00:33:41.331 --rc geninfo_unexecuted_blocks=1 00:33:41.331 00:33:41.331 ' 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:41.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.331 --rc genhtml_branch_coverage=1 00:33:41.331 --rc genhtml_function_coverage=1 00:33:41.331 --rc genhtml_legend=1 00:33:41.331 --rc geninfo_all_blocks=1 00:33:41.331 --rc geninfo_unexecuted_blocks=1 00:33:41.331 00:33:41.331 ' 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:41.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.331 --rc genhtml_branch_coverage=1 00:33:41.331 --rc genhtml_function_coverage=1 00:33:41.331 --rc genhtml_legend=1 00:33:41.331 --rc geninfo_all_blocks=1 00:33:41.331 --rc geninfo_unexecuted_blocks=1 00:33:41.331 00:33:41.331 ' 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.331 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:41.332 11:01:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.332 11:01:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:41.332 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:47.906 11:01:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:47.906 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:47.907 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:47.907 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:47.907 Found net devices under 0000:86:00.0: cvl_0_0 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.907 11:01:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:47.907 Found net devices under 0000:86:00.1: cvl_0_1 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:47.907 11:01:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:47.907 11:01:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:47.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:47.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:33:47.907 00:33:47.907 --- 10.0.0.2 ping statistics --- 00:33:47.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.907 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:47.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:47.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:33:47.907 00:33:47.907 --- 10.0.0.1 ping statistics --- 00:33:47.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.907 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:47.907 only one NIC for nvmf test 00:33:47.907 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:47.907 11:01:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:47.908 rmmod nvme_tcp 00:33:47.908 rmmod nvme_fabrics 00:33:47.908 rmmod nvme_keyring 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:47.908 11:01:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:47.908 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.286 
11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.286 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:49.286 00:33:49.286 real 0m8.320s 00:33:49.286 user 0m1.858s 00:33:49.286 sys 0m4.472s 00:33:49.286 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:49.286 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:49.286 ************************************ 00:33:49.286 END TEST nvmf_target_multipath 00:33:49.286 ************************************ 00:33:49.286 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:49.286 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:49.286 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:49.286 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:49.546 ************************************ 00:33:49.546 START TEST nvmf_zcopy 00:33:49.546 ************************************ 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:49.546 * Looking for test storage... 
00:33:49.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:49.546 11:01:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:49.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.546 --rc genhtml_branch_coverage=1 00:33:49.546 --rc genhtml_function_coverage=1 00:33:49.546 --rc genhtml_legend=1 00:33:49.546 --rc geninfo_all_blocks=1 00:33:49.546 --rc geninfo_unexecuted_blocks=1 00:33:49.546 00:33:49.546 ' 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:49.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.546 --rc genhtml_branch_coverage=1 00:33:49.546 --rc genhtml_function_coverage=1 00:33:49.546 --rc genhtml_legend=1 00:33:49.546 --rc geninfo_all_blocks=1 00:33:49.546 --rc geninfo_unexecuted_blocks=1 00:33:49.546 00:33:49.546 ' 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:49.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.546 --rc genhtml_branch_coverage=1 00:33:49.546 --rc genhtml_function_coverage=1 00:33:49.546 --rc genhtml_legend=1 00:33:49.546 --rc geninfo_all_blocks=1 00:33:49.546 --rc geninfo_unexecuted_blocks=1 00:33:49.546 00:33:49.546 ' 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:49.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.546 --rc genhtml_branch_coverage=1 00:33:49.546 --rc genhtml_function_coverage=1 00:33:49.546 --rc genhtml_legend=1 00:33:49.546 --rc geninfo_all_blocks=1 00:33:49.546 --rc geninfo_unexecuted_blocks=1 00:33:49.546 00:33:49.546 ' 00:33:49.546 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:49.547 11:01:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:49.547 11:01:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:49.547 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:56.142 
11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:56.142 11:01:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:56.142 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:56.143 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:56.143 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:56.143 Found net devices under 0000:86:00.0: cvl_0_0 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:56.143 Found net devices under 0000:86:00.1: cvl_0_1 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:56.143 11:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:56.143 11:01:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:56.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:56.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:33:56.143 00:33:56.143 --- 10.0.0.2 ping statistics --- 00:33:56.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:56.143 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:56.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:56.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:33:56.143 00:33:56.143 --- 10.0.0.1 ping statistics --- 00:33:56.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:56.143 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=4147316 00:33:56.143 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 4147316 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 4147316 ']' 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:56.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:56.144 [2024-11-19 11:01:45.245993] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:56.144 [2024-11-19 11:01:45.246910] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:33:56.144 [2024-11-19 11:01:45.246948] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:56.144 [2024-11-19 11:01:45.325796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.144 [2024-11-19 11:01:45.366350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:56.144 [2024-11-19 11:01:45.366384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:56.144 [2024-11-19 11:01:45.366391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:56.144 [2024-11-19 11:01:45.366397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:56.144 [2024-11-19 11:01:45.366402] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:56.144 [2024-11-19 11:01:45.366909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:56.144 [2024-11-19 11:01:45.433539] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:56.144 [2024-11-19 11:01:45.433765] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:56.144 [2024-11-19 11:01:45.499632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:56.144 
11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:56.144 [2024-11-19 11:01:45.527871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:56.144 malloc0 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:56.144 { 00:33:56.144 "params": { 00:33:56.144 "name": "Nvme$subsystem", 00:33:56.144 "trtype": "$TEST_TRANSPORT", 00:33:56.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.144 "adrfam": "ipv4", 00:33:56.144 "trsvcid": "$NVMF_PORT", 00:33:56.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.144 "hdgst": ${hdgst:-false}, 00:33:56.144 "ddgst": ${ddgst:-false} 00:33:56.144 }, 00:33:56.144 "method": "bdev_nvme_attach_controller" 00:33:56.144 } 00:33:56.144 EOF 00:33:56.144 )") 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:56.144 11:01:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:56.144 11:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:56.144 "params": { 00:33:56.144 "name": "Nvme1", 00:33:56.144 "trtype": "tcp", 00:33:56.144 "traddr": "10.0.0.2", 00:33:56.144 "adrfam": "ipv4", 00:33:56.144 "trsvcid": "4420", 00:33:56.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:56.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:56.144 "hdgst": false, 00:33:56.144 "ddgst": false 00:33:56.144 }, 00:33:56.144 "method": "bdev_nvme_attach_controller" 00:33:56.144 }' 00:33:56.144 [2024-11-19 11:01:45.627469] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:33:56.144 [2024-11-19 11:01:45.627521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4147343 ] 00:33:56.144 [2024-11-19 11:01:45.704760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.144 [2024-11-19 11:01:45.745817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.403 Running I/O for 10 seconds... 
00:33:58.273 8517.00 IOPS, 66.54 MiB/s [2024-11-19T10:01:49.000Z] 8594.00 IOPS, 67.14 MiB/s [2024-11-19T10:01:50.377Z] 8641.00 IOPS, 67.51 MiB/s [2024-11-19T10:01:51.313Z] 8650.75 IOPS, 67.58 MiB/s [2024-11-19T10:01:52.247Z] 8641.60 IOPS, 67.51 MiB/s [2024-11-19T10:01:53.183Z] 8648.17 IOPS, 67.56 MiB/s [2024-11-19T10:01:54.119Z] 8655.14 IOPS, 67.62 MiB/s [2024-11-19T10:01:55.054Z] 8665.75 IOPS, 67.70 MiB/s [2024-11-19T10:01:55.986Z] 8669.78 IOPS, 67.73 MiB/s [2024-11-19T10:01:55.986Z] 8673.80 IOPS, 67.76 MiB/s 00:34:06.194 Latency(us) 00:34:06.194 [2024-11-19T10:01:55.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:06.194 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:34:06.194 Verification LBA range: start 0x0 length 0x1000 00:34:06.194 Nvme1n1 : 10.01 8674.81 67.77 0.00 0.00 14713.94 399.85 21595.67 00:34:06.194 [2024-11-19T10:01:55.986Z] =================================================================================================================== 00:34:06.194 [2024-11-19T10:01:55.986Z] Total : 8674.81 67.77 0.00 0.00 14713.94 399.85 21595.67 00:34:06.452 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4148974 00:34:06.452 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:34:06.452 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:06.452 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:34:06.452 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:34:06.452 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:06.452 11:01:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:06.452 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:06.452 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:06.452 { 00:34:06.452 "params": { 00:34:06.452 "name": "Nvme$subsystem", 00:34:06.452 "trtype": "$TEST_TRANSPORT", 00:34:06.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:06.452 "adrfam": "ipv4", 00:34:06.452 "trsvcid": "$NVMF_PORT", 00:34:06.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:06.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:06.452 "hdgst": ${hdgst:-false}, 00:34:06.452 "ddgst": ${ddgst:-false} 00:34:06.452 }, 00:34:06.452 "method": "bdev_nvme_attach_controller" 00:34:06.452 } 00:34:06.452 EOF 00:34:06.452 )") 00:34:06.452 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:06.452 [2024-11-19 11:01:56.135236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.453 [2024-11-19 11:01:56.135267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.453 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:34:06.453 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:06.453 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:06.453 "params": { 00:34:06.453 "name": "Nvme1", 00:34:06.453 "trtype": "tcp", 00:34:06.453 "traddr": "10.0.0.2", 00:34:06.453 "adrfam": "ipv4", 00:34:06.453 "trsvcid": "4420", 00:34:06.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:06.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:06.453 "hdgst": false, 00:34:06.453 "ddgst": false 00:34:06.453 }, 00:34:06.453 "method": "bdev_nvme_attach_controller" 00:34:06.453 }' 00:34:06.453 [2024-11-19 11:01:56.147200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.453 [2024-11-19 11:01:56.147217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.453 [2024-11-19 11:01:56.159198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.453 [2024-11-19 11:01:56.159215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.453 [2024-11-19 11:01:56.171199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.453 [2024-11-19 11:01:56.171215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.453 [2024-11-19 11:01:56.174908] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:34:06.453 [2024-11-19 11:01:56.174949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4148974 ] 00:34:06.453 [2024-11-19 11:01:56.183199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.453 [2024-11-19 11:01:56.183214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.453 [2024-11-19 11:01:56.195196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.453 [2024-11-19 11:01:56.195211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.453 [2024-11-19 11:01:56.207205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.453 [2024-11-19 11:01:56.207217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.453 [2024-11-19 11:01:56.219200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.453 [2024-11-19 11:01:56.219214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.453 [2024-11-19 11:01:56.231198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.453 [2024-11-19 11:01:56.231213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.712 [2024-11-19 11:01:56.243206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.243216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.253637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.713 [2024-11-19 11:01:56.255196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:34:06.713 [2024-11-19 11:01:56.255211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.267207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.267222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.279199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.279214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.291199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.291216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.295024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.713 [2024-11-19 11:01:56.303208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.303220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.315213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.315233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.327200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.327220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.339209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.339222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.351199] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.351217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.363200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.363218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.375214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.375247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.387218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.387243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.399218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.399235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.411214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.411229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.423212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.423228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.435211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.435245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 Running I/O for 5 seconds... 
00:34:06.713 [2024-11-19 11:01:56.447211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.447245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.463800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.463819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.479290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.479310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.713 [2024-11-19 11:01:56.492789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.713 [2024-11-19 11:01:56.492808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.507979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.507997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.523091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.523111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.536882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.536900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.551670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.551688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.566745] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.566763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.580530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.580548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.590866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.590884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.604806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.604824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.619027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.619045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.632021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.632040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.647006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.647029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.660944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.660963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.675650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.675668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.688264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.688282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.703612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.703630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.719551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.719568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.733183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.733206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.972 [2024-11-19 11:01:56.748131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.972 [2024-11-19 11:01:56.748149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.230 [2024-11-19 11:01:56.763764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.230 [2024-11-19 11:01:56.763782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.230 [2024-11-19 11:01:56.774715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.230 [2024-11-19 11:01:56.774733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.230 [2024-11-19 11:01:56.789025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.230 
[2024-11-19 11:01:56.789044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.230 [2024-11-19 11:01:56.803722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.230 [2024-11-19 11:01:56.803739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.230 [2024-11-19 11:01:56.816539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.230 [2024-11-19 11:01:56.816557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.230 [2024-11-19 11:01:56.831655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.230 [2024-11-19 11:01:56.831673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.230 [2024-11-19 11:01:56.842241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.230 [2024-11-19 11:01:56.842260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.230 [2024-11-19 11:01:56.857129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.230 [2024-11-19 11:01:56.857148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.230 [2024-11-19 11:01:56.871779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.230 [2024-11-19 11:01:56.871797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.230 [2024-11-19 11:01:56.887292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.230 [2024-11-19 11:01:56.887311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.230 [2024-11-19 11:01:56.900034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.230 [2024-11-19 11:01:56.900052] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.230 [2024-11-19 11:01:56.914948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.230 [2024-11-19 11:01:56.914975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.230 [2024-11-19 11:01:56.929040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.230 [2024-11-19 11:01:56.929059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.230 [2024-11-19 11:01:56.944171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.230 [2024-11-19 11:01:56.944189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.230 [2024-11-19 11:01:56.959908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.230 [2024-11-19 11:01:56.959926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.231 [2024-11-19 11:01:56.975442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.231 [2024-11-19 11:01:56.975461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.231 [2024-11-19 11:01:56.987726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.231 [2024-11-19 11:01:56.987744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.231 [2024-11-19 11:01:57.001108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.231 [2024-11-19 11:01:57.001126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.231 [2024-11-19 11:01:57.016457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.231 [2024-11-19 11:01:57.016475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:07.489 [2024-11-19 11:01:57.031503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.489 [2024-11-19 11:01:57.031521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.489 [2024-11-19 11:01:57.043813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.490 [2024-11-19 11:01:57.043831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.490 [2024-11-19 11:01:57.056976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.490 [2024-11-19 11:01:57.056994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.490 [2024-11-19 11:01:57.071720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.490 [2024-11-19 11:01:57.071738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.490 [2024-11-19 11:01:57.086861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.490 [2024-11-19 11:01:57.086879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.490 [2024-11-19 11:01:57.097880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.490 [2024-11-19 11:01:57.097898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.490 [2024-11-19 11:01:57.112947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.490 [2024-11-19 11:01:57.112965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.490 [2024-11-19 11:01:57.127722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.490 [2024-11-19 11:01:57.127740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.490 [2024-11-19 11:01:57.143490] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.490 [2024-11-19 11:01:57.143508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.490 [2024-11-19 11:01:57.158897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.490 [2024-11-19 11:01:57.158915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.490 [2024-11-19 11:01:57.171891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.490 [2024-11-19 11:01:57.171909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.490 [2024-11-19 11:01:57.184450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.490 [2024-11-19 11:01:57.184469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.490 [2024-11-19 11:01:57.199184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.490 [2024-11-19 11:01:57.199212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.490 [2024-11-19 11:01:57.211985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.490 [2024-11-19 11:01:57.212002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.490 [2024-11-19 11:01:57.224720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.490 [2024-11-19 11:01:57.224738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.490 [2024-11-19 11:01:57.239557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.490 [2024-11-19 11:01:57.239574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.490 [2024-11-19 11:01:57.252039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:07.490 [2024-11-19 11:01:57.252056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.490 [2024-11-19 11:01:57.264581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.490 [2024-11-19 11:01:57.264599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.279242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 [2024-11-19 11:01:57.279260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.290192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 [2024-11-19 11:01:57.290215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.304964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 [2024-11-19 11:01:57.304982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.319868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 [2024-11-19 11:01:57.319886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.335905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 [2024-11-19 11:01:57.335923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.350618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 [2024-11-19 11:01:57.350636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.365152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 
[2024-11-19 11:01:57.365170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.379982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 [2024-11-19 11:01:57.380001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.394754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 [2024-11-19 11:01:57.394773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.409028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 [2024-11-19 11:01:57.409047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.424064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 [2024-11-19 11:01:57.424083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.435801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 [2024-11-19 11:01:57.435818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.448987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 [2024-11-19 11:01:57.449005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 16649.00 IOPS, 130.07 MiB/s [2024-11-19T10:01:57.543Z] [2024-11-19 11:01:57.463710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 [2024-11-19 11:01:57.463727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.479350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 
[2024-11-19 11:01:57.479370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.492243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 [2024-11-19 11:01:57.492262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.507697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 [2024-11-19 11:01:57.507717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.520584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 [2024-11-19 11:01:57.520603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.751 [2024-11-19 11:01:57.531176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.751 [2024-11-19 11:01:57.531196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.010 [2024-11-19 11:01:57.545712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.545732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.010 [2024-11-19 11:01:57.560750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.560769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.010 [2024-11-19 11:01:57.575769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.575787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.010 [2024-11-19 11:01:57.591808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.591827] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.010 [2024-11-19 11:01:57.607026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.607045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.010 [2024-11-19 11:01:57.621043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.621062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.010 [2024-11-19 11:01:57.635569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.635587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.010 [2024-11-19 11:01:57.651120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.651141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.010 [2024-11-19 11:01:57.664507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.664526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.010 [2024-11-19 11:01:57.675071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.675092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.010 [2024-11-19 11:01:57.689156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.689177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.010 [2024-11-19 11:01:57.704356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.704375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:08.010 [2024-11-19 11:01:57.719290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.719309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.010 [2024-11-19 11:01:57.730628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.730648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.010 [2024-11-19 11:01:57.744701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.744720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.010 [2024-11-19 11:01:57.759761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.759780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.010 [2024-11-19 11:01:57.775394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.775413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.010 [2024-11-19 11:01:57.788974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.010 [2024-11-19 11:01:57.788993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.268 [2024-11-19 11:01:57.804209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.268 [2024-11-19 11:01:57.804228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.268 [2024-11-19 11:01:57.814121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.268 [2024-11-19 11:01:57.814140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.268 [2024-11-19 11:01:57.828871] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.268 [2024-11-19 11:01:57.828891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.268 [2024-11-19 11:01:57.843400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.268 [2024-11-19 11:01:57.843419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.268 [2024-11-19 11:01:57.854762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.268 [2024-11-19 11:01:57.854781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.268 [2024-11-19 11:01:57.868868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.268 [2024-11-19 11:01:57.868886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.268 [2024-11-19 11:01:57.883533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.268 [2024-11-19 11:01:57.883551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.268 [2024-11-19 11:01:57.899589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.268 [2024-11-19 11:01:57.899607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.268 [2024-11-19 11:01:57.915085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.268 [2024-11-19 11:01:57.915104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.268 [2024-11-19 11:01:57.928920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.269 [2024-11-19 11:01:57.928939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.269 [2024-11-19 11:01:57.943786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:08.269 [2024-11-19 11:01:57.943805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.269 [2024-11-19 11:01:57.959423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.269 [2024-11-19 11:01:57.959442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.269 [2024-11-19 11:01:57.971823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.269 [2024-11-19 11:01:57.971845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.269 [2024-11-19 11:01:57.984982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.269 [2024-11-19 11:01:57.985001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.269 [2024-11-19 11:01:57.999529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.269 [2024-11-19 11:01:57.999548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.269 [2024-11-19 11:01:58.015077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.269 [2024-11-19 11:01:58.015096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.269 [2024-11-19 11:01:58.029351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.269 [2024-11-19 11:01:58.029370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.269 [2024-11-19 11:01:58.043839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.269 [2024-11-19 11:01:58.043857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.058801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 
[2024-11-19 11:01:58.058820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.072884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.072902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.087801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.087819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.102963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.102982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.117485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.117503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.132121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.132139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.147223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.147242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.160124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.160141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.175364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.175383] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.188946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.188964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.203987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.204005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.219423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.219442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.231969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.231986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.247195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.247224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.261044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.261062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.275599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.275616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.287671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.287688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:08.527 [2024-11-19 11:01:58.300518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.300536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.527 [2024-11-19 11:01:58.314954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.527 [2024-11-19 11:01:58.314972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.785 [2024-11-19 11:01:58.328080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.785 [2024-11-19 11:01:58.328098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.785 [2024-11-19 11:01:58.343425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.785 [2024-11-19 11:01:58.343443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.785 [2024-11-19 11:01:58.356821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.785 [2024-11-19 11:01:58.356838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.785 [2024-11-19 11:01:58.371506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.785 [2024-11-19 11:01:58.371523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.785 [2024-11-19 11:01:58.384959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.785 [2024-11-19 11:01:58.384977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.785 [2024-11-19 11:01:58.399565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.785 [2024-11-19 11:01:58.399583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.785 [2024-11-19 11:01:58.415568] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.785 [2024-11-19 11:01:58.415587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.785 [2024-11-19 11:01:58.431026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.785 [2024-11-19 11:01:58.431045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.785 [2024-11-19 11:01:58.444174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.785 [2024-11-19 11:01:58.444192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.785 16721.00 IOPS, 130.63 MiB/s [2024-11-19T10:01:58.577Z] [2024-11-19 11:01:58.456557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.785 [2024-11-19 11:01:58.456575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.785 [2024-11-19 11:01:58.471391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.785 [2024-11-19 11:01:58.471410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.785 [2024-11-19 11:01:58.484375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.785 [2024-11-19 11:01:58.484394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.785 [2024-11-19 11:01:58.498882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.785 [2024-11-19 11:01:58.498900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.785 [2024-11-19 11:01:58.512725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.785 [2024-11-19 11:01:58.512748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.785 [2024-11-19 11:01:58.526987] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.785 [2024-11-19 11:01:58.527006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.785 [2024-11-19 11:01:58.540635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.785 [2024-11-19 11:01:58.540653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.785 [2024-11-19 11:01:58.555455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.785 [2024-11-19 11:01:58.555476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.786 [2024-11-19 11:01:58.568343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.786 [2024-11-19 11:01:58.568363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.579687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.044 [2024-11-19 11:01:58.579707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.592935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.044 [2024-11-19 11:01:58.592953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.607417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.044 [2024-11-19 11:01:58.607435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.618702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.044 [2024-11-19 11:01:58.618721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.633407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:09.044 [2024-11-19 11:01:58.633426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.648100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.044 [2024-11-19 11:01:58.648118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.663522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.044 [2024-11-19 11:01:58.663540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.679373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.044 [2024-11-19 11:01:58.679392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.692089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.044 [2024-11-19 11:01:58.692107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.707677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.044 [2024-11-19 11:01:58.707695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.723523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.044 [2024-11-19 11:01:58.723541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.737247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.044 [2024-11-19 11:01:58.737266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.751696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.044 
[2024-11-19 11:01:58.751714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.767265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.044 [2024-11-19 11:01:58.767284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.779813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.044 [2024-11-19 11:01:58.779831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.793271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.044 [2024-11-19 11:01:58.793289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.808628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.044 [2024-11-19 11:01:58.808647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.044 [2024-11-19 11:01:58.823038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.044 [2024-11-19 11:01:58.823057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.301 [2024-11-19 11:01:58.837579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:58.837598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.301 [2024-11-19 11:01:58.852381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:58.852399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.301 [2024-11-19 11:01:58.866697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:58.866715] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.301 [2024-11-19 11:01:58.881214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:58.881232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.301 [2024-11-19 11:01:58.895971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:58.895989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.301 [2024-11-19 11:01:58.911170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:58.911188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.301 [2024-11-19 11:01:58.923434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:58.923463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.301 [2024-11-19 11:01:58.936880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:58.936899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.301 [2024-11-19 11:01:58.951973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:58.951993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.301 [2024-11-19 11:01:58.966999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:58.967020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.301 [2024-11-19 11:01:58.980715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:58.980735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:09.301 [2024-11-19 11:01:58.995790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:58.995809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.301 [2024-11-19 11:01:59.010843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:59.010863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.301 [2024-11-19 11:01:59.025151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:59.025170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.301 [2024-11-19 11:01:59.039780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:59.039798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.301 [2024-11-19 11:01:59.055709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:59.055728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.301 [2024-11-19 11:01:59.071400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:59.071418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.301 [2024-11-19 11:01:59.085278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.301 [2024-11-19 11:01:59.085297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.558 [2024-11-19 11:01:59.100478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.558 [2024-11-19 11:01:59.100497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.558 [2024-11-19 11:01:59.115091] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.558 [2024-11-19 11:01:59.115110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.558 [2024-11-19 11:01:59.127413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.558 [2024-11-19 11:01:59.127433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.558 [2024-11-19 11:01:59.140999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.558 [2024-11-19 11:01:59.141017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.558 [2024-11-19 11:01:59.155948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.558 [2024-11-19 11:01:59.155967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.558 [2024-11-19 11:01:59.171050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.558 [2024-11-19 11:01:59.171069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.558 [2024-11-19 11:01:59.185248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.558 [2024-11-19 11:01:59.185268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.558 [2024-11-19 11:01:59.200377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.558 [2024-11-19 11:01:59.200395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.558 [2024-11-19 11:01:59.214869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.558 [2024-11-19 11:01:59.214889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.558 [2024-11-19 11:01:59.228549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:09.558 [2024-11-19 11:01:59.228568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.558 [2024-11-19 11:01:59.243501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.558 [2024-11-19 11:01:59.243519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.558 [2024-11-19 11:01:59.254805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.558 [2024-11-19 11:01:59.254824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.559 [2024-11-19 11:01:59.269001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.559 [2024-11-19 11:01:59.269020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.559 [2024-11-19 11:01:59.283731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.559 [2024-11-19 11:01:59.283750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.559 [2024-11-19 11:01:59.298809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.559 [2024-11-19 11:01:59.298828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.559 [2024-11-19 11:01:59.311886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.559 [2024-11-19 11:01:59.311904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.559 [2024-11-19 11:01:59.324818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.559 [2024-11-19 11:01:59.324836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.559 [2024-11-19 11:01:59.339533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.559 
[2024-11-19 11:01:59.339551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 [2024-11-19 11:01:59.355666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.355684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 [2024-11-19 11:01:59.367093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.367112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 [2024-11-19 11:01:59.381275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.381294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 [2024-11-19 11:01:59.395723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.395740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 [2024-11-19 11:01:59.411337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.411355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 [2024-11-19 11:01:59.423332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.423350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 [2024-11-19 11:01:59.436571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.436588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 [2024-11-19 11:01:59.451473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.451491] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 16711.33 IOPS, 130.56 MiB/s [2024-11-19T10:01:59.608Z] [2024-11-19 11:01:59.463872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.463889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 [2024-11-19 11:01:59.478933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.478951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 [2024-11-19 11:01:59.493309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.493328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 [2024-11-19 11:01:59.507933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.507951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 [2024-11-19 11:01:59.523673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.523690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 [2024-11-19 11:01:59.535899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.535917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 [2024-11-19 11:01:59.549287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.549305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 [2024-11-19 11:01:59.564106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.564123] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 [2024-11-19 11:01:59.578987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.579011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.816 [2024-11-19 11:01:59.593114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.816 [2024-11-19 11:01:59.593133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.077 [2024-11-19 11:01:59.608294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.608313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.077 [2024-11-19 11:01:59.624131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.624148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.077 [2024-11-19 11:01:59.635215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.635233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.077 [2024-11-19 11:01:59.649211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.649229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.077 [2024-11-19 11:01:59.663836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.663853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.077 [2024-11-19 11:01:59.679057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.679076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:10.077 [2024-11-19 11:01:59.693071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.693089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.077 [2024-11-19 11:01:59.707477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.707495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.077 [2024-11-19 11:01:59.718223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.718242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.077 [2024-11-19 11:01:59.733042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.733061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.077 [2024-11-19 11:01:59.748123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.748142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.077 [2024-11-19 11:01:59.762899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.762917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.077 [2024-11-19 11:01:59.776853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.776870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.077 [2024-11-19 11:01:59.791904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.791922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.077 [2024-11-19 11:01:59.807171] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.807190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.077 [2024-11-19 11:01:59.820972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.820991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.077 [2024-11-19 11:01:59.836105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.836122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.077 [2024-11-19 11:01:59.851744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.077 [2024-11-19 11:01:59.851767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.335 [2024-11-19 11:01:59.866764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.335 [2024-11-19 11:01:59.866783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.335 [2024-11-19 11:01:59.880922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.335 [2024-11-19 11:01:59.880940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.335 [2024-11-19 11:01:59.895337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.335 [2024-11-19 11:01:59.895356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.336 [2024-11-19 11:01:59.908406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.336 [2024-11-19 11:01:59.908424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.336 [2024-11-19 11:01:59.919580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:10.336 [2024-11-19 11:01:59.919597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.336 [2024-11-19 11:01:59.932671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.336 [2024-11-19 11:01:59.932689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.336 [2024-11-19 11:01:59.947217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.336 [2024-11-19 11:01:59.947236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.336 [2024-11-19 11:01:59.959534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.336 [2024-11-19 11:01:59.959552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.336 [2024-11-19 11:01:59.973678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.336 [2024-11-19 11:01:59.973697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.336 [2024-11-19 11:01:59.988175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.336 [2024-11-19 11:01:59.988193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.336 [2024-11-19 11:02:00.002784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.336 [2024-11-19 11:02:00.002804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.336 [2024-11-19 11:02:00.017215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.336 [2024-11-19 11:02:00.017233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.336 [2024-11-19 11:02:00.032635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.336 
[2024-11-19 11:02:00.032655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.336 [2024-11-19 11:02:00.047933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.336 [2024-11-19 11:02:00.047951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.336 [2024-11-19 11:02:00.063481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.336 [2024-11-19 11:02:00.063499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.336 [2024-11-19 11:02:00.075107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.336 [2024-11-19 11:02:00.075126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.336 [2024-11-19 11:02:00.089509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.336 [2024-11-19 11:02:00.089527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.336 [2024-11-19 11:02:00.104756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.336 [2024-11-19 11:02:00.104776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.336 [2024-11-19 11:02:00.119624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.336 [2024-11-19 11:02:00.119646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.132072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.132090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.145167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.145185] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.160477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.160495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.175223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.175242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.186431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.186450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.201535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.201554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.215887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.215905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.230702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.230721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.245700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.245718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.259960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.259978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:10.594 [2024-11-19 11:02:00.275279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.275297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.288086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.288105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.300887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.300906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.315462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.315481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.325924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.325942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.340653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.340672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.355311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.355330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.366818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.366836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.594 [2024-11-19 11:02:00.380992] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.594 [2024-11-19 11:02:00.381016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 [2024-11-19 11:02:00.395767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.395785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 [2024-11-19 11:02:00.411227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.411247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 [2024-11-19 11:02:00.424213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.424232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 [2024-11-19 11:02:00.439005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.439026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 [2024-11-19 11:02:00.452452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.452471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 16718.25 IOPS, 130.61 MiB/s [2024-11-19T10:02:00.644Z] [2024-11-19 11:02:00.467159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.467178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 [2024-11-19 11:02:00.478696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.478715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 [2024-11-19 11:02:00.493023] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.493041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 [2024-11-19 11:02:00.507747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.507765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 [2024-11-19 11:02:00.523316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.523335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 [2024-11-19 11:02:00.536987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.537005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 [2024-11-19 11:02:00.551650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.551668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 [2024-11-19 11:02:00.566973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.566992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 [2024-11-19 11:02:00.580692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.580710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 [2024-11-19 11:02:00.595644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.595663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 [2024-11-19 11:02:00.611804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.611823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 [2024-11-19 11:02:00.626805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.626825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.852 [2024-11-19 11:02:00.638209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.852 [2024-11-19 11:02:00.638228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.110 [2024-11-19 11:02:00.652940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.110 [2024-11-19 11:02:00.652958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.110 [2024-11-19 11:02:00.667609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.110 [2024-11-19 11:02:00.667626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.110 [2024-11-19 11:02:00.683090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.110 [2024-11-19 11:02:00.683108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.110 [2024-11-19 11:02:00.696358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.110 [2024-11-19 11:02:00.696377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.110 [2024-11-19 11:02:00.707591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.110 [2024-11-19 11:02:00.707609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.110 [2024-11-19 11:02:00.720722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.110 
[2024-11-19 11:02:00.720741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.110 [2024-11-19 11:02:00.731073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.110 [2024-11-19 11:02:00.731091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.110 [2024-11-19 11:02:00.744927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.110 [2024-11-19 11:02:00.744946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.110 [2024-11-19 11:02:00.759608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.110 [2024-11-19 11:02:00.759626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.110 [2024-11-19 11:02:00.775171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.110 [2024-11-19 11:02:00.775190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.110 [2024-11-19 11:02:00.788093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.110 [2024-11-19 11:02:00.788111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.110 [2024-11-19 11:02:00.800694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.110 [2024-11-19 11:02:00.800713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.110 [2024-11-19 11:02:00.815741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.110 [2024-11-19 11:02:00.815759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.110 [2024-11-19 11:02:00.831849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.110 [2024-11-19 11:02:00.831868] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.110 [2024-11-19 11:02:00.847194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.110 [2024-11-19 11:02:00.847217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.110 [2024-11-19 11:02:00.859999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.111 [2024-11-19 11:02:00.860016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.111 [2024-11-19 11:02:00.872633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.111 [2024-11-19 11:02:00.872652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.111 [2024-11-19 11:02:00.887393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.111 [2024-11-19 11:02:00.887411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.111 [2024-11-19 11:02:00.898482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.111 [2024-11-19 11:02:00.898500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.369 [2024-11-19 11:02:00.913481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:00.913499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.369 [2024-11-19 11:02:00.928380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:00.928398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.369 [2024-11-19 11:02:00.943397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:00.943416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:11.369 [2024-11-19 11:02:00.954677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:00.954695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.369 [2024-11-19 11:02:00.969369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:00.969387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.369 [2024-11-19 11:02:00.983961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:00.983978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.369 [2024-11-19 11:02:00.996154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:00.996172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.369 [2024-11-19 11:02:01.007295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:01.007313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.369 [2024-11-19 11:02:01.021390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:01.021417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.369 [2024-11-19 11:02:01.036373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:01.036391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.369 [2024-11-19 11:02:01.051117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:01.051135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.369 [2024-11-19 11:02:01.064947] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:01.064965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.369 [2024-11-19 11:02:01.079699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:01.079715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.369 [2024-11-19 11:02:01.095329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:01.095348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.369 [2024-11-19 11:02:01.107560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:01.107577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.369 [2024-11-19 11:02:01.121101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:01.121119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.369 [2024-11-19 11:02:01.136073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:01.136091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.369 [2024-11-19 11:02:01.151005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.369 [2024-11-19 11:02:01.151024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.164112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.626 [2024-11-19 11:02:01.164136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.179556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:11.626 [2024-11-19 11:02:01.179574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.192012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.626 [2024-11-19 11:02:01.192031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.207908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.626 [2024-11-19 11:02:01.207927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.222720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.626 [2024-11-19 11:02:01.222738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.237131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.626 [2024-11-19 11:02:01.237149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.252191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.626 [2024-11-19 11:02:01.252215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.267315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.626 [2024-11-19 11:02:01.267334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.280896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.626 [2024-11-19 11:02:01.280914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.295700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.626 
[2024-11-19 11:02:01.295717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.311105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.626 [2024-11-19 11:02:01.311123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.325463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.626 [2024-11-19 11:02:01.325482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.339876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.626 [2024-11-19 11:02:01.339911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.355331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.626 [2024-11-19 11:02:01.355350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.369044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.626 [2024-11-19 11:02:01.369062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.383920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.626 [2024-11-19 11:02:01.383938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.399528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.626 [2024-11-19 11:02:01.399546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.626 [2024-11-19 11:02:01.415290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.626 [2024-11-19 11:02:01.415310] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.885 [2024-11-19 11:02:01.428939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.885 [2024-11-19 11:02:01.428957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.885 [2024-11-19 11:02:01.444237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.885 [2024-11-19 11:02:01.444260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.885 [2024-11-19 11:02:01.458873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.885 [2024-11-19 11:02:01.458892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.885 16717.60 IOPS, 130.61 MiB/s [2024-11-19T10:02:01.677Z] [2024-11-19 11:02:01.467208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.885 [2024-11-19 11:02:01.467225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.885 00:34:11.885 Latency(us) 00:34:11.885 [2024-11-19T10:02:01.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.886 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:34:11.886 Nvme1n1 : 5.01 16719.77 130.62 0.00 0.00 7648.69 1997.29 13731.35 00:34:11.886 [2024-11-19T10:02:01.678Z] =================================================================================================================== 00:34:11.886 [2024-11-19T10:02:01.678Z] Total : 16719.77 130.62 0.00 0.00 7648.69 1997.29 13731.35 00:34:11.886 [2024-11-19 11:02:01.479207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.886 [2024-11-19 11:02:01.479223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.886 [2024-11-19 11:02:01.491212] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.886 [2024-11-19 11:02:01.491225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.886 [2024-11-19 11:02:01.503213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.886 [2024-11-19 11:02:01.503232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.886 [2024-11-19 11:02:01.515204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.886 [2024-11-19 11:02:01.515218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.886 [2024-11-19 11:02:01.527204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.886 [2024-11-19 11:02:01.527217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.886 [2024-11-19 11:02:01.539199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.886 [2024-11-19 11:02:01.539216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.886 [2024-11-19 11:02:01.551200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.886 [2024-11-19 11:02:01.551218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.886 [2024-11-19 11:02:01.563199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.886 [2024-11-19 11:02:01.563217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.886 [2024-11-19 11:02:01.575197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.886 [2024-11-19 11:02:01.575211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.886 [2024-11-19 11:02:01.587198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:11.886 [2024-11-19 11:02:01.587210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.886 [2024-11-19 11:02:01.599205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.886 [2024-11-19 11:02:01.599216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.886 [2024-11-19 11:02:01.611197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.886 [2024-11-19 11:02:01.611210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.886 [2024-11-19 11:02:01.623200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.886 [2024-11-19 11:02:01.623212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4148974) - No such process 00:34:11.886 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4148974 00:34:11.886 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:11.886 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.886 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:11.886 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.886 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:11.886 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.886 
11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:11.886 delay0 00:34:11.886 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.886 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:11.886 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.886 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:11.886 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.886 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:34:12.143 [2024-11-19 11:02:01.768912] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:20.261 Initializing NVMe Controllers 00:34:20.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:20.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:20.261 Initialization complete. Launching workers. 
00:34:20.261 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 265, failed: 21949 00:34:20.261 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22115, failed to submit 99 00:34:20.261 success 22015, unsuccessful 100, failed 0 00:34:20.261 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:20.261 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:20.261 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:20.261 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:20.261 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:20.261 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:20.261 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:20.261 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:20.261 rmmod nvme_tcp 00:34:20.261 rmmod nvme_fabrics 00:34:20.261 rmmod nvme_keyring 00:34:20.261 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:20.261 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:20.261 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:20.262 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 4147316 ']' 00:34:20.262 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 4147316 00:34:20.262 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 4147316 ']' 00:34:20.262 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 4147316 00:34:20.262 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:34:20.262 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:20.262 11:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4147316 00:34:20.262 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:20.262 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:20.262 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4147316' 00:34:20.262 killing process with pid 4147316 00:34:20.262 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 4147316 00:34:20.262 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 4147316 00:34:20.262 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:20.262 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:20.262 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:20.262 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:20.262 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:20.262 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:34:20.262 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:20.262 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:20.262 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:20.262 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.262 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:20.262 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:21.638 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:21.638 00:34:21.638 real 0m32.186s 00:34:21.638 user 0m41.345s 00:34:21.638 sys 0m13.094s 00:34:21.638 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:21.638 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:21.638 ************************************ 00:34:21.638 END TEST nvmf_zcopy 00:34:21.638 ************************************ 00:34:21.638 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:21.638 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:21.638 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:21.638 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:21.638 
************************************ 00:34:21.638 START TEST nvmf_nmic 00:34:21.638 ************************************ 00:34:21.638 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:21.899 * Looking for test storage... 00:34:21.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:21.899 11:02:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:21.899 11:02:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:21.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.899 --rc genhtml_branch_coverage=1 00:34:21.899 --rc genhtml_function_coverage=1 00:34:21.899 --rc genhtml_legend=1 00:34:21.899 --rc geninfo_all_blocks=1 00:34:21.899 --rc geninfo_unexecuted_blocks=1 00:34:21.899 00:34:21.899 ' 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:21.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.899 --rc genhtml_branch_coverage=1 00:34:21.899 --rc genhtml_function_coverage=1 00:34:21.899 --rc genhtml_legend=1 00:34:21.899 --rc geninfo_all_blocks=1 00:34:21.899 --rc geninfo_unexecuted_blocks=1 00:34:21.899 00:34:21.899 ' 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:21.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.899 --rc genhtml_branch_coverage=1 00:34:21.899 --rc genhtml_function_coverage=1 00:34:21.899 --rc genhtml_legend=1 00:34:21.899 --rc geninfo_all_blocks=1 00:34:21.899 --rc geninfo_unexecuted_blocks=1 00:34:21.899 00:34:21.899 ' 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:21.899 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.899 --rc genhtml_branch_coverage=1 00:34:21.899 --rc genhtml_function_coverage=1 00:34:21.899 --rc genhtml_legend=1 00:34:21.899 --rc geninfo_all_blocks=1 00:34:21.899 --rc geninfo_unexecuted_blocks=1 00:34:21.899 00:34:21.899 ' 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:21.899 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:21.900 11:02:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.900 11:02:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:21.900 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.471 11:02:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:28.471 11:02:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:28.471 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:28.471 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.471 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.472 11:02:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:28.472 Found net devices under 0000:86:00.0: cvl_0_0 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.472 11:02:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:28.472 Found net devices under 0000:86:00.1: cvl_0_1 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:28.472 11:02:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:28.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:28.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:34:28.472 00:34:28.472 --- 10.0.0.2 ping statistics --- 00:34:28.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.472 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:28.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:28.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:34:28.472 00:34:28.472 --- 10.0.0.1 ping statistics --- 00:34:28.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.472 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=4154529 
00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 4154529 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 4154529 ']' 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:28.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:28.472 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.472 [2024-11-19 11:02:17.550841] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:28.472 [2024-11-19 11:02:17.551763] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:34:28.472 [2024-11-19 11:02:17.551795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:28.472 [2024-11-19 11:02:17.614168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:28.472 [2024-11-19 11:02:17.655191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:28.472 [2024-11-19 11:02:17.655233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:28.472 [2024-11-19 11:02:17.655241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:28.472 [2024-11-19 11:02:17.655246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:28.472 [2024-11-19 11:02:17.655251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:28.472 [2024-11-19 11:02:17.656884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:28.472 [2024-11-19 11:02:17.657023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:28.472 [2024-11-19 11:02:17.657131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:28.472 [2024-11-19 11:02:17.657132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:28.472 [2024-11-19 11:02:17.724141] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:28.472 [2024-11-19 11:02:17.724675] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:28.472 [2024-11-19 11:02:17.725068] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:28.472 [2024-11-19 11:02:17.725439] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:28.473 [2024-11-19 11:02:17.725487] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.473 [2024-11-19 11:02:17.805800] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.473 Malloc0 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.473 [2024-11-19 11:02:17.893943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:28.473 11:02:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:28.473 test case1: single bdev can't be used in multiple subsystems 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.473 [2024-11-19 11:02:17.925510] 
bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:28.473 [2024-11-19 11:02:17.925534] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:28.473 [2024-11-19 11:02:17.925542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.473 request: 00:34:28.473 { 00:34:28.473 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:28.473 "namespace": { 00:34:28.473 "bdev_name": "Malloc0", 00:34:28.473 "no_auto_visible": false 00:34:28.473 }, 00:34:28.473 "method": "nvmf_subsystem_add_ns", 00:34:28.473 "req_id": 1 00:34:28.473 } 00:34:28.473 Got JSON-RPC error response 00:34:28.473 response: 00:34:28.473 { 00:34:28.473 "code": -32602, 00:34:28.473 "message": "Invalid parameters" 00:34:28.473 } 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:28.473 Adding namespace failed - expected result. 
00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:28.473 test case2: host connect to nvmf target in multiple paths 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.473 [2024-11-19 11:02:17.937619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.473 11:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:28.473 11:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:28.732 11:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:28.732 11:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:28.732 11:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:28.732 11:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:28.732 11:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:30.641 11:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:30.641 11:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:30.641 11:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:30.641 11:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:30.641 11:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:30.641 11:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:30.641 11:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:30.641 [global] 00:34:30.641 thread=1 00:34:30.641 invalidate=1 00:34:30.641 rw=write 00:34:30.641 time_based=1 00:34:30.641 runtime=1 00:34:30.641 ioengine=libaio 00:34:30.641 direct=1 00:34:30.641 bs=4096 00:34:30.641 iodepth=1 00:34:30.641 norandommap=0 00:34:30.641 numjobs=1 00:34:30.641 00:34:30.641 verify_dump=1 00:34:30.641 verify_backlog=512 00:34:30.641 verify_state_save=0 00:34:30.641 do_verify=1 00:34:30.641 verify=crc32c-intel 00:34:30.641 [job0] 00:34:30.641 filename=/dev/nvme0n1 00:34:30.641 Could not set queue depth (nvme0n1) 00:34:30.900 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:30.900 fio-3.35 00:34:30.900 Starting 1 thread 00:34:32.278 00:34:32.278 job0: (groupid=0, jobs=1): err= 0: pid=4155140: Tue Nov 19 
11:02:21 2024 00:34:32.278 read: IOPS=22, BW=90.2KiB/s (92.4kB/s)(92.0KiB/1020msec) 00:34:32.278 slat (nsec): min=9820, max=26036, avg=22909.96, stdev=3020.93 00:34:32.278 clat (usec): min=40834, max=41037, avg=40963.46, stdev=45.49 00:34:32.278 lat (usec): min=40858, max=41061, avg=40986.37, stdev=46.80 00:34:32.278 clat percentiles (usec): 00:34:32.278 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:32.278 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:32.278 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:32.278 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:32.278 | 99.99th=[41157] 00:34:32.278 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:34:32.278 slat (nsec): min=9231, max=43769, avg=11142.18, stdev=2400.29 00:34:32.278 clat (usec): min=124, max=302, avg=137.03, stdev= 9.04 00:34:32.278 lat (usec): min=134, max=345, avg=148.18, stdev=10.58 00:34:32.278 clat percentiles (usec): 00:34:32.278 | 1.00th=[ 128], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 135], 00:34:32.278 | 30.00th=[ 135], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 137], 00:34:32.278 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 143], 95.00th=[ 147], 00:34:32.278 | 99.00th=[ 155], 99.50th=[ 165], 99.90th=[ 302], 99.95th=[ 302], 00:34:32.278 | 99.99th=[ 302] 00:34:32.278 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:32.278 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:32.278 lat (usec) : 250=95.51%, 500=0.19% 00:34:32.278 lat (msec) : 50=4.30% 00:34:32.278 cpu : usr=0.39%, sys=0.88%, ctx=535, majf=0, minf=1 00:34:32.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:32.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.278 issued rwts: 
total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:32.278 00:34:32.278 Run status group 0 (all jobs): 00:34:32.278 READ: bw=90.2KiB/s (92.4kB/s), 90.2KiB/s-90.2KiB/s (92.4kB/s-92.4kB/s), io=92.0KiB (94.2kB), run=1020-1020msec 00:34:32.278 WRITE: bw=2008KiB/s (2056kB/s), 2008KiB/s-2008KiB/s (2056kB/s-2056kB/s), io=2048KiB (2097kB), run=1020-1020msec 00:34:32.278 00:34:32.278 Disk stats (read/write): 00:34:32.278 nvme0n1: ios=70/512, merge=0/0, ticks=845/66, in_queue=911, util=91.18% 00:34:32.278 11:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:32.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:32.278 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:32.278 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:32.278 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:32.278 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:32.278 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:32.278 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:32.278 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:32.278 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:32.278 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:32.278 11:02:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:32.278 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:32.278 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:32.278 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:32.278 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:32.278 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:32.278 rmmod nvme_tcp 00:34:32.278 rmmod nvme_fabrics 00:34:32.538 rmmod nvme_keyring 00:34:32.538 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:32.538 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:32.538 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:32.538 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 4154529 ']' 00:34:32.538 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 4154529 00:34:32.538 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 4154529 ']' 00:34:32.538 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 4154529 00:34:32.538 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:32.538 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:32.538 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4154529 
00:34:32.538 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:32.538 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:32.538 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4154529' 00:34:32.538 killing process with pid 4154529 00:34:32.538 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 4154529 00:34:32.538 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 4154529 00:34:32.797 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:32.797 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:32.797 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:32.797 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:32.797 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:32.797 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:32.797 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:32.797 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:32.797 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:32.797 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.797 11:02:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.797 11:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.704 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:34.704 00:34:34.704 real 0m13.076s 00:34:34.704 user 0m23.880s 00:34:34.704 sys 0m6.051s 00:34:34.704 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:34.704 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:34.704 ************************************ 00:34:34.704 END TEST nvmf_nmic 00:34:34.704 ************************************ 00:34:34.704 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:34.704 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:34.704 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:34.704 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:34.964 ************************************ 00:34:34.964 START TEST nvmf_fio_target 00:34:34.964 ************************************ 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:34.964 * Looking for test storage... 
00:34:34.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:34.964 
11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:34.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.964 --rc genhtml_branch_coverage=1 00:34:34.964 --rc genhtml_function_coverage=1 00:34:34.964 --rc genhtml_legend=1 00:34:34.964 --rc geninfo_all_blocks=1 00:34:34.964 --rc geninfo_unexecuted_blocks=1 00:34:34.964 00:34:34.964 ' 00:34:34.964 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:34.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.964 --rc genhtml_branch_coverage=1 00:34:34.964 --rc genhtml_function_coverage=1 00:34:34.964 --rc genhtml_legend=1 00:34:34.965 --rc geninfo_all_blocks=1 00:34:34.965 --rc geninfo_unexecuted_blocks=1 00:34:34.965 00:34:34.965 ' 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:34.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.965 --rc genhtml_branch_coverage=1 00:34:34.965 --rc genhtml_function_coverage=1 00:34:34.965 --rc genhtml_legend=1 00:34:34.965 --rc geninfo_all_blocks=1 00:34:34.965 --rc geninfo_unexecuted_blocks=1 00:34:34.965 00:34:34.965 ' 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:34.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.965 --rc genhtml_branch_coverage=1 00:34:34.965 --rc genhtml_function_coverage=1 00:34:34.965 --rc genhtml_legend=1 00:34:34.965 --rc geninfo_all_blocks=1 
00:34:34.965 --rc geninfo_unexecuted_blocks=1 00:34:34.965 00:34:34.965 ' 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:34.965 
11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.965 11:02:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:34.965 
11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:34.965 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:34.965 11:02:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:41.539 11:02:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:41.539 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:41.540 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:41.540 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:41.540 
11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:41.540 Found net 
devices under 0000:86:00.0: cvl_0_0 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:41.540 Found net devices under 0000:86:00.1: cvl_0_1 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:41.540 11:02:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:41.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:41.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:34:41.540 00:34:41.540 --- 10.0.0.2 ping statistics --- 00:34:41.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.540 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:41.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:41.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:34:41.540 00:34:41.540 --- 10.0.0.1 ping statistics --- 00:34:41.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.540 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:41.540 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:41.541 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:41.541 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:41.541 11:02:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=4158898 00:34:41.541 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 4158898 00:34:41.541 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:41.541 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 4158898 ']' 00:34:41.541 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:41.541 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:41.541 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:41.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:41.541 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:41.541 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:41.541 [2024-11-19 11:02:30.682212] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:41.541 [2024-11-19 11:02:30.683092] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:34:41.541 [2024-11-19 11:02:30.683125] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:41.541 [2024-11-19 11:02:30.763416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:41.541 [2024-11-19 11:02:30.805690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:41.541 [2024-11-19 11:02:30.805728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:41.541 [2024-11-19 11:02:30.805735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:41.541 [2024-11-19 11:02:30.805741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:41.541 [2024-11-19 11:02:30.805746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:41.541 [2024-11-19 11:02:30.807295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:41.541 [2024-11-19 11:02:30.807313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:41.541 [2024-11-19 11:02:30.807404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:41.541 [2024-11-19 11:02:30.807405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:41.541 [2024-11-19 11:02:30.873886] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:41.541 [2024-11-19 11:02:30.874512] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:41.541 [2024-11-19 11:02:30.874834] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:41.541 [2024-11-19 11:02:30.874993] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:41.541 [2024-11-19 11:02:30.875095] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:41.541 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:41.541 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:41.541 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:41.541 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:41.541 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:41.541 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:41.541 11:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:41.541 [2024-11-19 11:02:31.112191] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:41.541 11:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:41.800 11:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:41.800 11:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:34:41.800 11:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:41.800 11:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:42.060 11:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:42.060 11:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:42.319 11:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:42.319 11:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:42.578 11:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:42.837 11:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:42.837 11:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:42.837 11:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:42.837 11:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:43.096 11:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:34:43.096 11:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:43.355 11:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:43.355 11:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:43.355 11:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:43.614 11:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:43.614 11:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:43.872 11:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:43.872 [2024-11-19 11:02:33.640141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:44.131 11:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:44.131 11:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:44.391 11:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:44.650 11:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:44.650 11:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:44.650 11:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:44.650 11:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:44.650 11:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:44.650 11:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:47.185 11:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:47.185 11:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:47.185 11:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:47.185 11:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:47.185 11:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:47.185 11:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:34:47.185 11:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:47.185 [global] 00:34:47.185 thread=1 00:34:47.185 invalidate=1 00:34:47.185 rw=write 00:34:47.185 time_based=1 00:34:47.185 runtime=1 00:34:47.185 ioengine=libaio 00:34:47.185 direct=1 00:34:47.185 bs=4096 00:34:47.185 iodepth=1 00:34:47.185 norandommap=0 00:34:47.185 numjobs=1 00:34:47.185 00:34:47.185 verify_dump=1 00:34:47.185 verify_backlog=512 00:34:47.185 verify_state_save=0 00:34:47.185 do_verify=1 00:34:47.185 verify=crc32c-intel 00:34:47.185 [job0] 00:34:47.185 filename=/dev/nvme0n1 00:34:47.185 [job1] 00:34:47.185 filename=/dev/nvme0n2 00:34:47.185 [job2] 00:34:47.185 filename=/dev/nvme0n3 00:34:47.185 [job3] 00:34:47.185 filename=/dev/nvme0n4 00:34:47.185 Could not set queue depth (nvme0n1) 00:34:47.185 Could not set queue depth (nvme0n2) 00:34:47.185 Could not set queue depth (nvme0n3) 00:34:47.185 Could not set queue depth (nvme0n4) 00:34:47.185 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:47.185 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:47.185 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:47.185 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:47.185 fio-3.35 00:34:47.185 Starting 4 threads 00:34:48.564 00:34:48.564 job0: (groupid=0, jobs=1): err= 0: pid=4160013: Tue Nov 19 11:02:37 2024 00:34:48.564 read: IOPS=52, BW=211KiB/s (216kB/s)(212KiB/1004msec) 00:34:48.564 slat (nsec): min=7486, max=23038, avg=13877.81, stdev=6939.41 00:34:48.564 clat (usec): min=197, max=41900, avg=17155.31, stdev=20290.08 00:34:48.564 lat (usec): min=205, 
max=41922, avg=17169.19, stdev=20296.44 00:34:48.564 clat percentiles (usec): 00:34:48.564 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 217], 00:34:48.564 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 241], 60.00th=[40633], 00:34:48.564 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:48.564 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:34:48.564 | 99.99th=[41681] 00:34:48.564 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:34:48.564 slat (nsec): min=9297, max=42351, avg=10651.14, stdev=2253.02 00:34:48.564 clat (usec): min=146, max=1562, avg=170.72, stdev=63.54 00:34:48.564 lat (usec): min=157, max=1572, avg=181.37, stdev=63.66 00:34:48.564 clat percentiles (usec): 00:34:48.564 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:34:48.564 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:34:48.564 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 200], 00:34:48.564 | 99.00th=[ 231], 99.50th=[ 265], 99.90th=[ 1565], 99.95th=[ 1565], 00:34:48.564 | 99.99th=[ 1565] 00:34:48.564 bw ( KiB/s): min= 4087, max= 4087, per=22.87%, avg=4087.00, stdev= 0.00, samples=1 00:34:48.564 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:34:48.564 lat (usec) : 250=95.22%, 500=0.71% 00:34:48.564 lat (msec) : 2=0.18%, 50=3.89% 00:34:48.564 cpu : usr=0.50%, sys=0.40%, ctx=565, majf=0, minf=2 00:34:48.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.564 issued rwts: total=53,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:48.564 job1: (groupid=0, jobs=1): err= 0: pid=4160014: Tue Nov 19 11:02:37 2024 00:34:48.564 read: IOPS=659, BW=2637KiB/s 
(2701kB/s)(2640KiB/1001msec) 00:34:48.564 slat (nsec): min=7304, max=26424, avg=8648.04, stdev=1700.15 00:34:48.564 clat (usec): min=184, max=41070, avg=1215.91, stdev=6069.90 00:34:48.564 lat (usec): min=193, max=41080, avg=1224.55, stdev=6070.49 00:34:48.564 clat percentiles (usec): 00:34:48.564 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 231], 00:34:48.564 | 30.00th=[ 245], 40.00th=[ 260], 50.00th=[ 281], 60.00th=[ 297], 00:34:48.564 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 420], 95.00th=[ 461], 00:34:48.564 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:48.564 | 99.99th=[41157] 00:34:48.564 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:34:48.564 slat (nsec): min=10314, max=41944, avg=12069.38, stdev=2085.43 00:34:48.564 clat (usec): min=131, max=1979, avg=170.68, stdev=68.83 00:34:48.564 lat (usec): min=143, max=1991, avg=182.75, stdev=68.94 00:34:48.564 clat percentiles (usec): 00:34:48.564 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 147], 00:34:48.564 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:34:48.564 | 70.00th=[ 167], 80.00th=[ 178], 90.00th=[ 221], 95.00th=[ 243], 00:34:48.564 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 660], 99.95th=[ 1975], 00:34:48.564 | 99.99th=[ 1975] 00:34:48.564 bw ( KiB/s): min= 4087, max= 4087, per=22.87%, avg=4087.00, stdev= 0.00, samples=1 00:34:48.564 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:34:48.564 lat (usec) : 250=72.33%, 500=26.25%, 750=0.42% 00:34:48.564 lat (msec) : 2=0.06%, 4=0.06%, 50=0.89% 00:34:48.564 cpu : usr=1.70%, sys=2.00%, ctx=1686, majf=0, minf=1 00:34:48.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.564 issued rwts: total=660,1024,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:34:48.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:48.564 job2: (groupid=0, jobs=1): err= 0: pid=4160015: Tue Nov 19 11:02:37 2024 00:34:48.564 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:34:48.564 slat (nsec): min=7398, max=38084, avg=8454.20, stdev=1236.47 00:34:48.564 clat (usec): min=178, max=474, avg=244.40, stdev=38.15 00:34:48.564 lat (usec): min=187, max=482, avg=252.85, stdev=38.16 00:34:48.564 clat percentiles (usec): 00:34:48.564 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 212], 00:34:48.564 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 247], 00:34:48.564 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 314], 00:34:48.564 | 99.00th=[ 330], 99.50th=[ 420], 99.90th=[ 441], 99.95th=[ 449], 00:34:48.564 | 99.99th=[ 474] 00:34:48.564 write: IOPS=2462, BW=9850KiB/s (10.1MB/s)(9860KiB/1001msec); 0 zone resets 00:34:48.565 slat (nsec): min=10954, max=45030, avg=12755.60, stdev=2239.15 00:34:48.565 clat (usec): min=128, max=381, avg=176.49, stdev=38.37 00:34:48.565 lat (usec): min=142, max=395, avg=189.24, stdev=38.89 00:34:48.565 clat percentiles (usec): 00:34:48.565 | 1.00th=[ 139], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:34:48.565 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 163], 00:34:48.565 | 70.00th=[ 194], 80.00th=[ 215], 90.00th=[ 239], 95.00th=[ 245], 00:34:48.565 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 330], 99.95th=[ 330], 00:34:48.565 | 99.99th=[ 383] 00:34:48.565 bw ( KiB/s): min= 8824, max= 8824, per=49.37%, avg=8824.00, stdev= 0.00, samples=1 00:34:48.565 iops : min= 2206, max= 2206, avg=2206.00, stdev= 0.00, samples=1 00:34:48.565 lat (usec) : 250=80.50%, 500=19.50% 00:34:48.565 cpu : usr=4.10%, sys=7.20%, ctx=4514, majf=0, minf=1 00:34:48.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.565 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.565 issued rwts: total=2048,2465,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:48.565 job3: (groupid=0, jobs=1): err= 0: pid=4160016: Tue Nov 19 11:02:37 2024 00:34:48.565 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:34:48.565 slat (nsec): min=10273, max=28724, avg=21077.09, stdev=5302.19 00:34:48.565 clat (usec): min=40802, max=41946, avg=41100.35, stdev=349.88 00:34:48.565 lat (usec): min=40812, max=41975, avg=41121.43, stdev=351.51 00:34:48.565 clat percentiles (usec): 00:34:48.565 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:34:48.565 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:48.565 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:34:48.565 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:48.565 | 99.99th=[42206] 00:34:48.565 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:34:48.565 slat (nsec): min=11204, max=38324, avg=14273.93, stdev=3107.17 00:34:48.565 clat (usec): min=140, max=367, avg=183.67, stdev=17.57 00:34:48.565 lat (usec): min=152, max=379, avg=197.94, stdev=18.27 00:34:48.565 clat percentiles (usec): 00:34:48.565 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:34:48.565 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 186], 00:34:48.565 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 210], 00:34:48.565 | 99.00th=[ 237], 99.50th=[ 262], 99.90th=[ 367], 99.95th=[ 367], 00:34:48.565 | 99.99th=[ 367] 00:34:48.565 bw ( KiB/s): min= 4096, max= 4096, per=22.92%, avg=4096.00, stdev= 0.00, samples=1 00:34:48.565 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:48.565 lat (usec) : 250=95.13%, 500=0.75% 00:34:48.565 lat (msec) : 50=4.12% 00:34:48.565 cpu : usr=0.79%, sys=0.69%, ctx=536, majf=0, minf=1 
00:34:48.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.565 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:48.565 00:34:48.565 Run status group 0 (all jobs): 00:34:48.565 READ: bw=10.8MiB/s (11.3MB/s), 87.1KiB/s-8184KiB/s (89.2kB/s-8380kB/s), io=10.9MiB (11.4MB), run=1001-1010msec 00:34:48.565 WRITE: bw=17.5MiB/s (18.3MB/s), 2028KiB/s-9850KiB/s (2076kB/s-10.1MB/s), io=17.6MiB (18.5MB), run=1001-1010msec 00:34:48.565 00:34:48.565 Disk stats (read/write): 00:34:48.565 nvme0n1: ios=99/512, merge=0/0, ticks=973/85, in_queue=1058, util=90.78% 00:34:48.565 nvme0n2: ios=549/512, merge=0/0, ticks=973/90, in_queue=1063, util=98.47% 00:34:48.565 nvme0n3: ios=1881/2048, merge=0/0, ticks=892/357, in_queue=1249, util=98.23% 00:34:48.565 nvme0n4: ios=76/512, merge=0/0, ticks=1588/85, in_queue=1673, util=98.22% 00:34:48.565 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:48.565 [global] 00:34:48.565 thread=1 00:34:48.565 invalidate=1 00:34:48.565 rw=randwrite 00:34:48.565 time_based=1 00:34:48.565 runtime=1 00:34:48.565 ioengine=libaio 00:34:48.565 direct=1 00:34:48.565 bs=4096 00:34:48.565 iodepth=1 00:34:48.565 norandommap=0 00:34:48.565 numjobs=1 00:34:48.565 00:34:48.565 verify_dump=1 00:34:48.565 verify_backlog=512 00:34:48.565 verify_state_save=0 00:34:48.565 do_verify=1 00:34:48.565 verify=crc32c-intel 00:34:48.565 [job0] 00:34:48.565 filename=/dev/nvme0n1 00:34:48.565 [job1] 00:34:48.565 filename=/dev/nvme0n2 00:34:48.565 [job2] 00:34:48.565 filename=/dev/nvme0n3 00:34:48.565 [job3] 00:34:48.565 filename=/dev/nvme0n4 
00:34:48.565 Could not set queue depth (nvme0n1) 00:34:48.565 Could not set queue depth (nvme0n2) 00:34:48.565 Could not set queue depth (nvme0n3) 00:34:48.565 Could not set queue depth (nvme0n4) 00:34:48.565 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:48.565 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:48.565 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:48.565 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:48.565 fio-3.35 00:34:48.565 Starting 4 threads 00:34:49.947 00:34:49.947 job0: (groupid=0, jobs=1): err= 0: pid=4160389: Tue Nov 19 11:02:39 2024 00:34:49.947 read: IOPS=22, BW=88.6KiB/s (90.8kB/s)(92.0KiB/1038msec) 00:34:49.947 slat (nsec): min=11905, max=27804, avg=22440.87, stdev=2676.14 00:34:49.947 clat (usec): min=40854, max=42044, avg=41025.10, stdev=230.78 00:34:49.947 lat (usec): min=40878, max=42068, avg=41047.54, stdev=230.66 00:34:49.947 clat percentiles (usec): 00:34:49.947 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:49.947 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:49.947 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:49.947 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:49.947 | 99.99th=[42206] 00:34:49.947 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:34:49.947 slat (nsec): min=9917, max=39704, avg=11681.80, stdev=2457.48 00:34:49.947 clat (usec): min=142, max=306, avg=168.71, stdev=13.25 00:34:49.947 lat (usec): min=158, max=345, avg=180.39, stdev=14.23 00:34:49.947 clat percentiles (usec): 00:34:49.947 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:34:49.947 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 
167], 60.00th=[ 169], 00:34:49.947 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 190], 00:34:49.947 | 99.00th=[ 200], 99.50th=[ 219], 99.90th=[ 306], 99.95th=[ 306], 00:34:49.947 | 99.99th=[ 306] 00:34:49.947 bw ( KiB/s): min= 4096, max= 4096, per=18.98%, avg=4096.00, stdev= 0.00, samples=1 00:34:49.947 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:49.947 lat (usec) : 250=95.33%, 500=0.37% 00:34:49.947 lat (msec) : 50=4.30% 00:34:49.947 cpu : usr=0.29%, sys=0.96%, ctx=535, majf=0, minf=2 00:34:49.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.947 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:49.947 job1: (groupid=0, jobs=1): err= 0: pid=4160396: Tue Nov 19 11:02:39 2024 00:34:49.947 read: IOPS=1535, BW=6143KiB/s (6291kB/s)(6180KiB/1006msec) 00:34:49.947 slat (nsec): min=7687, max=46697, avg=9728.57, stdev=2745.60 00:34:49.947 clat (usec): min=186, max=41145, avg=412.84, stdev=2735.12 00:34:49.947 lat (usec): min=194, max=41155, avg=422.57, stdev=2735.47 00:34:49.947 clat percentiles (usec): 00:34:49.947 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 219], 00:34:49.947 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 231], 00:34:49.947 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 243], 95.00th=[ 249], 00:34:49.947 | 99.00th=[ 277], 99.50th=[ 416], 99.90th=[41157], 99.95th=[41157], 00:34:49.947 | 99.99th=[41157] 00:34:49.947 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:34:49.947 slat (nsec): min=8978, max=44678, avg=12074.22, stdev=2469.68 00:34:49.947 clat (usec): min=131, max=411, avg=155.16, stdev=16.20 00:34:49.947 lat (usec): min=142, max=455, avg=167.23, stdev=16.69 
00:34:49.947 clat percentiles (usec): 00:34:49.947 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:34:49.947 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:34:49.947 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 178], 00:34:49.947 | 99.00th=[ 206], 99.50th=[ 247], 99.90th=[ 330], 99.95th=[ 343], 00:34:49.947 | 99.99th=[ 412] 00:34:49.947 bw ( KiB/s): min= 8192, max= 8192, per=37.96%, avg=8192.00, stdev= 0.00, samples=2 00:34:49.947 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:34:49.947 lat (usec) : 250=98.05%, 500=1.75% 00:34:49.947 lat (msec) : 50=0.19% 00:34:49.947 cpu : usr=2.49%, sys=3.58%, ctx=3594, majf=0, minf=1 00:34:49.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.947 issued rwts: total=1545,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:49.947 job2: (groupid=0, jobs=1): err= 0: pid=4160403: Tue Nov 19 11:02:39 2024 00:34:49.947 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:34:49.947 slat (nsec): min=7336, max=36487, avg=8453.39, stdev=1224.36 00:34:49.947 clat (usec): min=202, max=814, avg=247.18, stdev=24.66 00:34:49.947 lat (usec): min=211, max=823, avg=255.64, stdev=24.66 00:34:49.947 clat percentiles (usec): 00:34:49.947 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 235], 00:34:49.947 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:34:49.947 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:34:49.947 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 570], 99.95th=[ 693], 00:34:49.947 | 99.99th=[ 816] 00:34:49.947 write: IOPS=2525, BW=9.86MiB/s (10.3MB/s)(9.88MiB/1001msec); 0 zone resets 00:34:49.947 slat (nsec): min=10763, max=47251, 
avg=12106.49, stdev=1838.82 00:34:49.947 clat (usec): min=141, max=460, avg=170.55, stdev=16.51 00:34:49.947 lat (usec): min=153, max=474, avg=182.66, stdev=16.88 00:34:49.947 clat percentiles (usec): 00:34:49.947 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:34:49.947 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:34:49.947 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 196], 00:34:49.947 | 99.00th=[ 215], 99.50th=[ 227], 99.90th=[ 375], 99.95th=[ 375], 00:34:49.947 | 99.99th=[ 461] 00:34:49.947 bw ( KiB/s): min= 9896, max= 9896, per=45.86%, avg=9896.00, stdev= 0.00, samples=1 00:34:49.947 iops : min= 2474, max= 2474, avg=2474.00, stdev= 0.00, samples=1 00:34:49.947 lat (usec) : 250=81.29%, 500=18.62%, 750=0.07%, 1000=0.02% 00:34:49.947 cpu : usr=2.40%, sys=8.80%, ctx=4577, majf=0, minf=1 00:34:49.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.947 issued rwts: total=2048,2528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:49.947 job3: (groupid=0, jobs=1): err= 0: pid=4160409: Tue Nov 19 11:02:39 2024 00:34:49.947 read: IOPS=216, BW=865KiB/s (886kB/s)(880KiB/1017msec) 00:34:49.947 slat (nsec): min=6916, max=26343, avg=9067.51, stdev=4642.09 00:34:49.947 clat (usec): min=214, max=41554, avg=4129.25, stdev=12013.39 00:34:49.947 lat (usec): min=221, max=41567, avg=4138.31, stdev=12017.66 00:34:49.947 clat percentiles (usec): 00:34:49.947 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 223], 20.00th=[ 229], 00:34:49.947 | 30.00th=[ 231], 40.00th=[ 233], 50.00th=[ 235], 60.00th=[ 239], 00:34:49.947 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 277], 95.00th=[41157], 00:34:49.947 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 
00:34:49.947 | 99.99th=[41681] 00:34:49.947 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:34:49.947 slat (nsec): min=9543, max=42511, avg=10819.82, stdev=1794.24 00:34:49.947 clat (usec): min=141, max=446, avg=191.39, stdev=20.26 00:34:49.948 lat (usec): min=151, max=457, avg=202.21, stdev=20.71 00:34:49.948 clat percentiles (usec): 00:34:49.948 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 180], 00:34:49.948 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 196], 00:34:49.948 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 219], 00:34:49.948 | 99.00th=[ 241], 99.50th=[ 251], 99.90th=[ 449], 99.95th=[ 449], 00:34:49.948 | 99.99th=[ 449] 00:34:49.948 bw ( KiB/s): min= 4096, max= 4096, per=18.98%, avg=4096.00, stdev= 0.00, samples=1 00:34:49.948 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:49.948 lat (usec) : 250=94.95%, 500=2.19% 00:34:49.948 lat (msec) : 50=2.87% 00:34:49.948 cpu : usr=0.49%, sys=0.49%, ctx=734, majf=0, minf=1 00:34:49.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.948 issued rwts: total=220,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:49.948 00:34:49.948 Run status group 0 (all jobs): 00:34:49.948 READ: bw=14.4MiB/s (15.1MB/s), 88.6KiB/s-8184KiB/s (90.8kB/s-8380kB/s), io=15.0MiB (15.7MB), run=1001-1038msec 00:34:49.948 WRITE: bw=21.1MiB/s (22.1MB/s), 1973KiB/s-9.86MiB/s (2020kB/s-10.3MB/s), io=21.9MiB (22.9MB), run=1001-1038msec 00:34:49.948 00:34:49.948 Disk stats (read/write): 00:34:49.948 nvme0n1: ios=68/512, merge=0/0, ticks=758/80, in_queue=838, util=86.67% 00:34:49.948 nvme0n2: ios=1586/1888, merge=0/0, ticks=921/293, in_queue=1214, util=94.12% 00:34:49.948 nvme0n3: 
ios=1826/2048, merge=0/0, ticks=1259/332, in_queue=1591, util=99.27% 00:34:49.948 nvme0n4: ios=274/512, merge=0/0, ticks=1556/98, in_queue=1654, util=98.11% 00:34:49.948 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:49.948 [global] 00:34:49.948 thread=1 00:34:49.948 invalidate=1 00:34:49.948 rw=write 00:34:49.948 time_based=1 00:34:49.948 runtime=1 00:34:49.948 ioengine=libaio 00:34:49.948 direct=1 00:34:49.948 bs=4096 00:34:49.948 iodepth=128 00:34:49.948 norandommap=0 00:34:49.948 numjobs=1 00:34:49.948 00:34:49.948 verify_dump=1 00:34:49.948 verify_backlog=512 00:34:49.948 verify_state_save=0 00:34:49.948 do_verify=1 00:34:49.948 verify=crc32c-intel 00:34:49.948 [job0] 00:34:49.948 filename=/dev/nvme0n1 00:34:49.948 [job1] 00:34:49.948 filename=/dev/nvme0n2 00:34:49.948 [job2] 00:34:49.948 filename=/dev/nvme0n3 00:34:49.948 [job3] 00:34:49.948 filename=/dev/nvme0n4 00:34:49.948 Could not set queue depth (nvme0n1) 00:34:49.948 Could not set queue depth (nvme0n2) 00:34:49.948 Could not set queue depth (nvme0n3) 00:34:49.948 Could not set queue depth (nvme0n4) 00:34:50.206 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:50.206 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:50.206 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:50.206 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:50.206 fio-3.35 00:34:50.206 Starting 4 threads 00:34:51.585 00:34:51.585 job0: (groupid=0, jobs=1): err= 0: pid=4160802: Tue Nov 19 11:02:41 2024 00:34:51.585 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:34:51.585 slat (nsec): min=1577, max=11791k, 
avg=92078.62, stdev=633631.86 00:34:51.585 clat (usec): min=6264, max=40989, avg=12402.35, stdev=5199.98 00:34:51.585 lat (usec): min=6273, max=41003, avg=12494.43, stdev=5243.86 00:34:51.585 clat percentiles (usec): 00:34:51.585 | 1.00th=[ 7177], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9241], 00:34:51.585 | 30.00th=[ 9634], 40.00th=[10421], 50.00th=[10945], 60.00th=[11469], 00:34:51.585 | 70.00th=[12387], 80.00th=[13173], 90.00th=[18220], 95.00th=[25822], 00:34:51.585 | 99.00th=[32637], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:34:51.585 | 99.99th=[41157] 00:34:51.585 write: IOPS=5333, BW=20.8MiB/s (21.8MB/s)(20.9MiB/1003msec); 0 zone resets 00:34:51.585 slat (usec): min=2, max=23081, avg=92.32, stdev=656.30 00:34:51.585 clat (usec): min=511, max=48259, avg=11861.16, stdev=4814.20 00:34:51.585 lat (usec): min=4912, max=48269, avg=11953.48, stdev=4872.25 00:34:51.585 clat percentiles (usec): 00:34:51.585 | 1.00th=[ 6259], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[ 9896], 00:34:51.585 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:34:51.585 | 70.00th=[10945], 80.00th=[12125], 90.00th=[17695], 95.00th=[20055], 00:34:51.585 | 99.00th=[36439], 99.50th=[47973], 99.90th=[47973], 99.95th=[48497], 00:34:51.585 | 99.99th=[48497] 00:34:51.585 bw ( KiB/s): min=19320, max=22456, per=28.08%, avg=20888.00, stdev=2217.49, samples=2 00:34:51.585 iops : min= 4830, max= 5614, avg=5222.00, stdev=554.37, samples=2 00:34:51.585 lat (usec) : 750=0.01% 00:34:51.585 lat (msec) : 10=27.98%, 20=64.33%, 50=7.69% 00:34:51.585 cpu : usr=3.59%, sys=8.58%, ctx=374, majf=0, minf=1 00:34:51.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:51.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:51.585 issued rwts: total=5120,5350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.585 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:34:51.585 job1: (groupid=0, jobs=1): err= 0: pid=4160823: Tue Nov 19 11:02:41 2024 00:34:51.585 read: IOPS=5829, BW=22.8MiB/s (23.9MB/s)(23.0MiB/1008msec) 00:34:51.585 slat (nsec): min=1289, max=10249k, avg=83761.58, stdev=678170.40 00:34:51.585 clat (usec): min=2296, max=21284, avg=10841.42, stdev=2969.06 00:34:51.585 lat (usec): min=2883, max=21287, avg=10925.18, stdev=3010.35 00:34:51.585 clat percentiles (usec): 00:34:51.585 | 1.00th=[ 5276], 5.00th=[ 7439], 10.00th=[ 8225], 20.00th=[ 8848], 00:34:51.585 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10290], 00:34:51.585 | 70.00th=[11207], 80.00th=[13304], 90.00th=[15664], 95.00th=[16909], 00:34:51.585 | 99.00th=[18482], 99.50th=[19006], 99.90th=[20317], 99.95th=[21365], 00:34:51.585 | 99.99th=[21365] 00:34:51.585 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:34:51.585 slat (usec): min=2, max=28987, avg=76.01, stdev=598.22 00:34:51.585 clat (usec): min=1773, max=42840, avg=9802.54, stdev=2532.08 00:34:51.585 lat (usec): min=1794, max=42853, avg=9878.56, stdev=2590.30 00:34:51.585 clat percentiles (usec): 00:34:51.585 | 1.00th=[ 2999], 5.00th=[ 5211], 10.00th=[ 6390], 20.00th=[ 8029], 00:34:51.585 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:34:51.585 | 70.00th=[10421], 80.00th=[11338], 90.00th=[12256], 95.00th=[14222], 00:34:51.585 | 99.00th=[16057], 99.50th=[16057], 99.90th=[19006], 99.95th=[21365], 00:34:51.585 | 99.99th=[42730] 00:34:51.585 bw ( KiB/s): min=24576, max=24576, per=33.04%, avg=24576.00, stdev= 0.00, samples=2 00:34:51.585 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:34:51.585 lat (msec) : 2=0.22%, 4=1.61%, 10=46.64%, 20=51.36%, 50=0.17% 00:34:51.585 cpu : usr=4.97%, sys=6.36%, ctx=585, majf=0, minf=1 00:34:51.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:51.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:34:51.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:51.585 issued rwts: total=5876,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.585 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:51.585 job2: (groupid=0, jobs=1): err= 0: pid=4160852: Tue Nov 19 11:02:41 2024 00:34:51.585 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:34:51.585 slat (nsec): min=1075, max=13802k, avg=127679.76, stdev=859626.37 00:34:51.585 clat (usec): min=519, max=38755, avg=15463.82, stdev=6156.07 00:34:51.585 lat (usec): min=528, max=38764, avg=15591.50, stdev=6221.21 00:34:51.585 clat percentiles (usec): 00:34:51.585 | 1.00th=[ 1876], 5.00th=[ 8979], 10.00th=[10290], 20.00th=[11863], 00:34:51.585 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13173], 60.00th=[14615], 00:34:51.585 | 70.00th=[15795], 80.00th=[20055], 90.00th=[25297], 95.00th=[28181], 00:34:51.585 | 99.00th=[34866], 99.50th=[35914], 99.90th=[38536], 99.95th=[38536], 00:34:51.585 | 99.99th=[38536] 00:34:51.585 write: IOPS=3712, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1006msec); 0 zone resets 00:34:51.585 slat (nsec): min=1879, max=10594k, avg=135479.69, stdev=647895.62 00:34:51.585 clat (usec): min=2975, max=39349, avg=19203.68, stdev=7133.03 00:34:51.585 lat (usec): min=2989, max=39357, avg=19339.16, stdev=7187.95 00:34:51.585 clat percentiles (usec): 00:34:51.585 | 1.00th=[ 4621], 5.00th=[ 8225], 10.00th=[11207], 20.00th=[11994], 00:34:51.585 | 30.00th=[13698], 40.00th=[16188], 50.00th=[20055], 60.00th=[22414], 00:34:51.585 | 70.00th=[22938], 80.00th=[25560], 90.00th=[28705], 95.00th=[31065], 00:34:51.586 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[38536], 00:34:51.586 | 99.99th=[39584] 00:34:51.586 bw ( KiB/s): min=13072, max=15792, per=19.40%, avg=14432.00, stdev=1923.33, samples=2 00:34:51.586 iops : min= 3268, max= 3948, avg=3608.00, stdev=480.83, samples=2 00:34:51.586 lat (usec) : 750=0.01% 00:34:51.586 lat (msec) : 2=0.66%, 
4=0.49%, 10=6.46%, 20=57.23%, 50=35.14% 00:34:51.586 cpu : usr=2.69%, sys=3.88%, ctx=386, majf=0, minf=1 00:34:51.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:34:51.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:51.586 issued rwts: total=3584,3735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.586 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:51.586 job3: (groupid=0, jobs=1): err= 0: pid=4160862: Tue Nov 19 11:02:41 2024 00:34:51.586 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:34:51.586 slat (nsec): min=1718, max=13915k, avg=147886.72, stdev=875692.34 00:34:51.586 clat (usec): min=9127, max=60471, avg=19660.74, stdev=10268.21 00:34:51.586 lat (usec): min=9130, max=61833, avg=19808.63, stdev=10297.34 00:34:51.586 clat percentiles (usec): 00:34:51.586 | 1.00th=[ 9765], 5.00th=[11076], 10.00th=[11863], 20.00th=[12649], 00:34:51.586 | 30.00th=[13566], 40.00th=[14615], 50.00th=[17171], 60.00th=[18220], 00:34:51.586 | 70.00th=[19530], 80.00th=[21365], 90.00th=[35390], 95.00th=[44303], 00:34:51.586 | 99.00th=[56886], 99.50th=[57934], 99.90th=[60556], 99.95th=[60556], 00:34:51.586 | 99.99th=[60556] 00:34:51.586 write: IOPS=3510, BW=13.7MiB/s (14.4MB/s)(13.7MiB/1002msec); 0 zone resets 00:34:51.586 slat (usec): min=2, max=10177, avg=148.85, stdev=695.94 00:34:51.586 clat (usec): min=374, max=61388, avg=18755.63, stdev=10948.41 00:34:51.586 lat (usec): min=3494, max=61401, avg=18904.48, stdev=11008.66 00:34:51.586 clat percentiles (usec): 00:34:51.586 | 1.00th=[ 6783], 5.00th=[ 9765], 10.00th=[11207], 20.00th=[11600], 00:34:51.586 | 30.00th=[12125], 40.00th=[13435], 50.00th=[14222], 60.00th=[17171], 00:34:51.586 | 70.00th=[21627], 80.00th=[22676], 90.00th=[32637], 95.00th=[42730], 00:34:51.586 | 99.00th=[61080], 99.50th=[61080], 99.90th=[61604], 99.95th=[61604], 00:34:51.586 | 
99.99th=[61604] 00:34:51.586 bw ( KiB/s): min=11568, max=15552, per=18.23%, avg=13560.00, stdev=2817.11, samples=2 00:34:51.586 iops : min= 2892, max= 3888, avg=3390.00, stdev=704.28, samples=2 00:34:51.586 lat (usec) : 500=0.02% 00:34:51.586 lat (msec) : 4=0.49%, 10=2.96%, 20=66.28%, 50=26.66%, 100=3.60% 00:34:51.586 cpu : usr=3.20%, sys=3.90%, ctx=390, majf=0, minf=1 00:34:51.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:34:51.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:51.586 issued rwts: total=3072,3518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.586 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:51.586 00:34:51.586 Run status group 0 (all jobs): 00:34:51.586 READ: bw=68.4MiB/s (71.7MB/s), 12.0MiB/s-22.8MiB/s (12.6MB/s-23.9MB/s), io=69.0MiB (72.3MB), run=1002-1008msec 00:34:51.586 WRITE: bw=72.6MiB/s (76.2MB/s), 13.7MiB/s-23.8MiB/s (14.4MB/s-25.0MB/s), io=73.2MiB (76.8MB), run=1002-1008msec 00:34:51.586 00:34:51.586 Disk stats (read/write): 00:34:51.586 nvme0n1: ios=4145/4262, merge=0/0, ticks=25561/23411, in_queue=48972, util=82.26% 00:34:51.586 nvme0n2: ios=4631/5119, merge=0/0, ticks=47621/47317, in_queue=94938, util=98.97% 00:34:51.586 nvme0n3: ios=2560/2943, merge=0/0, ticks=34828/49919, in_queue=84747, util=87.68% 00:34:51.586 nvme0n4: ios=2549/2560, merge=0/0, ticks=13530/12858, in_queue=26388, util=98.90% 00:34:51.586 11:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:51.586 [global] 00:34:51.586 thread=1 00:34:51.586 invalidate=1 00:34:51.586 rw=randwrite 00:34:51.586 time_based=1 00:34:51.586 runtime=1 00:34:51.586 ioengine=libaio 00:34:51.586 direct=1 00:34:51.586 bs=4096 00:34:51.586 iodepth=128 00:34:51.586 
norandommap=0 00:34:51.586 numjobs=1 00:34:51.586 00:34:51.586 verify_dump=1 00:34:51.586 verify_backlog=512 00:34:51.586 verify_state_save=0 00:34:51.586 do_verify=1 00:34:51.586 verify=crc32c-intel 00:34:51.586 [job0] 00:34:51.586 filename=/dev/nvme0n1 00:34:51.586 [job1] 00:34:51.586 filename=/dev/nvme0n2 00:34:51.586 [job2] 00:34:51.586 filename=/dev/nvme0n3 00:34:51.586 [job3] 00:34:51.586 filename=/dev/nvme0n4 00:34:51.586 Could not set queue depth (nvme0n1) 00:34:51.586 Could not set queue depth (nvme0n2) 00:34:51.586 Could not set queue depth (nvme0n3) 00:34:51.586 Could not set queue depth (nvme0n4) 00:34:51.845 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:51.845 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:51.845 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:51.845 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:51.845 fio-3.35 00:34:51.845 Starting 4 threads 00:34:53.233 00:34:53.233 job0: (groupid=0, jobs=1): err= 0: pid=4161248: Tue Nov 19 11:02:42 2024 00:34:53.233 read: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1014msec) 00:34:53.233 slat (nsec): min=1299, max=20862k, avg=139541.66, stdev=1120576.87 00:34:53.233 clat (usec): min=6843, max=42055, avg=17718.87, stdev=6564.77 00:34:53.233 lat (usec): min=6854, max=51489, avg=17858.41, stdev=6654.80 00:34:53.233 clat percentiles (usec): 00:34:53.233 | 1.00th=[ 8979], 5.00th=[10290], 10.00th=[10552], 20.00th=[11076], 00:34:53.233 | 30.00th=[12780], 40.00th=[15533], 50.00th=[16909], 60.00th=[17957], 00:34:53.233 | 70.00th=[20579], 80.00th=[23725], 90.00th=[27395], 95.00th=[28705], 00:34:53.233 | 99.00th=[38011], 99.50th=[38011], 99.90th=[40633], 99.95th=[41157], 00:34:53.233 | 99.99th=[42206] 00:34:53.233 write: IOPS=3698, 
BW=14.4MiB/s (15.1MB/s)(14.6MiB/1014msec); 0 zone resets 00:34:53.233 slat (usec): min=2, max=16013, avg=127.07, stdev=875.89 00:34:53.233 clat (usec): min=1458, max=59412, avg=17284.35, stdev=11474.71 00:34:53.233 lat (usec): min=1470, max=59433, avg=17411.42, stdev=11544.04 00:34:53.233 clat percentiles (usec): 00:34:53.233 | 1.00th=[ 6456], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[ 9634], 00:34:53.233 | 30.00th=[10552], 40.00th=[11731], 50.00th=[12125], 60.00th=[14091], 00:34:53.233 | 70.00th=[17695], 80.00th=[23462], 90.00th=[31065], 95.00th=[45876], 00:34:53.233 | 99.00th=[57410], 99.50th=[58459], 99.90th=[59507], 99.95th=[59507], 00:34:53.233 | 99.99th=[59507] 00:34:53.233 bw ( KiB/s): min=13352, max=15624, per=22.05%, avg=14488.00, stdev=1606.55, samples=2 00:34:53.233 iops : min= 3338, max= 3906, avg=3622.00, stdev=401.64, samples=2 00:34:53.233 lat (msec) : 2=0.03%, 4=0.16%, 10=14.84%, 20=56.18%, 50=26.92% 00:34:53.233 lat (msec) : 100=1.88% 00:34:53.233 cpu : usr=2.57%, sys=5.13%, ctx=257, majf=0, minf=2 00:34:53.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:34:53.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:53.233 issued rwts: total=3584,3750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:53.233 job1: (groupid=0, jobs=1): err= 0: pid=4161260: Tue Nov 19 11:02:42 2024 00:34:53.233 read: IOPS=3653, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1010msec) 00:34:53.233 slat (nsec): min=1072, max=14618k, avg=94910.16, stdev=812086.53 00:34:53.233 clat (usec): min=1958, max=66921, avg=13685.46, stdev=7888.85 00:34:53.233 lat (usec): min=1982, max=66924, avg=13780.37, stdev=7981.85 00:34:53.233 clat percentiles (usec): 00:34:53.233 | 1.00th=[ 2147], 5.00th=[ 2343], 10.00th=[ 5276], 20.00th=[ 8717], 00:34:53.233 | 30.00th=[10159], 40.00th=[10552], 
50.00th=[12387], 60.00th=[13960], 00:34:53.233 | 70.00th=[16057], 80.00th=[18220], 90.00th=[21890], 95.00th=[27919], 00:34:53.233 | 99.00th=[45876], 99.50th=[49021], 99.90th=[60031], 99.95th=[60031], 00:34:53.233 | 99.99th=[66847] 00:34:53.233 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:34:53.233 slat (nsec): min=1930, max=20062k, avg=97665.40, stdev=782586.02 00:34:53.233 clat (usec): min=220, max=138045, avg=18945.88, stdev=22102.98 00:34:53.233 lat (usec): min=227, max=138053, avg=19043.55, stdev=22206.07 00:34:53.233 clat percentiles (usec): 00:34:53.233 | 1.00th=[ 1045], 5.00th=[ 3195], 10.00th=[ 4883], 20.00th=[ 7635], 00:34:53.233 | 30.00th=[ 8717], 40.00th=[ 10159], 50.00th=[ 11076], 60.00th=[ 14484], 00:34:53.233 | 70.00th=[ 18482], 80.00th=[ 21627], 90.00th=[ 44303], 95.00th=[ 67634], 00:34:53.233 | 99.00th=[124257], 99.50th=[128451], 99.90th=[137364], 99.95th=[137364], 00:34:53.233 | 99.99th=[137364] 00:34:53.233 bw ( KiB/s): min=15776, max=16816, per=24.80%, avg=16296.00, stdev=735.39, samples=2 00:34:53.233 iops : min= 3944, max= 4204, avg=4074.00, stdev=183.85, samples=2 00:34:53.233 lat (usec) : 250=0.03%, 500=0.05%, 750=0.01%, 1000=0.31% 00:34:53.233 lat (msec) : 2=1.01%, 4=7.21%, 10=25.37%, 20=45.76%, 50=15.34% 00:34:53.233 lat (msec) : 100=3.81%, 250=1.10% 00:34:53.233 cpu : usr=2.38%, sys=4.26%, ctx=417, majf=0, minf=1 00:34:53.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:53.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:53.233 issued rwts: total=3690,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:53.233 job2: (groupid=0, jobs=1): err= 0: pid=4161274: Tue Nov 19 11:02:42 2024 00:34:53.234 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:34:53.234 slat (nsec): min=1576, 
max=48512k, avg=89498.77, stdev=861100.58 00:34:53.234 clat (usec): min=5666, max=60357, avg=11480.00, stdev=6448.87 00:34:53.234 lat (usec): min=5674, max=60374, avg=11569.50, stdev=6489.72 00:34:53.234 clat percentiles (usec): 00:34:53.234 | 1.00th=[ 6718], 5.00th=[ 7570], 10.00th=[ 8094], 20.00th=[ 8717], 00:34:53.234 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10814], 00:34:53.234 | 70.00th=[11731], 80.00th=[12649], 90.00th=[14746], 95.00th=[16581], 00:34:53.234 | 99.00th=[53740], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:34:53.234 | 99.99th=[60556] 00:34:53.234 write: IOPS=6083, BW=23.8MiB/s (24.9MB/s)(23.8MiB/1001msec); 0 zone resets 00:34:53.234 slat (usec): min=2, max=9260, avg=73.63, stdev=421.90 00:34:53.234 clat (usec): min=426, max=56565, avg=10194.80, stdev=3813.72 00:34:53.234 lat (usec): min=1404, max=56573, avg=10268.43, stdev=3825.98 00:34:53.234 clat percentiles (usec): 00:34:53.234 | 1.00th=[ 4047], 5.00th=[ 5473], 10.00th=[ 7504], 20.00th=[ 8848], 00:34:53.234 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[10159], 00:34:53.234 | 70.00th=[11207], 80.00th=[11731], 90.00th=[12780], 95.00th=[14091], 00:34:53.234 | 99.00th=[17695], 99.50th=[25035], 99.90th=[56361], 99.95th=[56361], 00:34:53.234 | 99.99th=[56361] 00:34:53.234 bw ( KiB/s): min=24056, max=24056, per=36.61%, avg=24056.00, stdev= 0.00, samples=1 00:34:53.234 iops : min= 6014, max= 6014, avg=6014.00, stdev= 0.00, samples=1 00:34:53.234 lat (usec) : 500=0.01% 00:34:53.234 lat (msec) : 2=0.16%, 4=0.26%, 10=52.93%, 20=44.94%, 50=0.61% 00:34:53.234 lat (msec) : 100=1.08% 00:34:53.234 cpu : usr=4.80%, sys=6.90%, ctx=515, majf=0, minf=1 00:34:53.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:53.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:53.234 issued rwts: total=5632,6090,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:34:53.234 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:53.234 job3: (groupid=0, jobs=1): err= 0: pid=4161279: Tue Nov 19 11:02:42 2024 00:34:53.234 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:34:53.234 slat (nsec): min=1454, max=26751k, avg=208596.54, stdev=1609600.70 00:34:53.234 clat (usec): min=4173, max=98354, avg=25077.85, stdev=17194.50 00:34:53.234 lat (usec): min=4179, max=98365, avg=25286.45, stdev=17353.62 00:34:53.234 clat percentiles (usec): 00:34:53.234 | 1.00th=[ 8160], 5.00th=[ 8848], 10.00th=[10945], 20.00th=[12518], 00:34:53.234 | 30.00th=[16581], 40.00th=[17695], 50.00th=[19530], 60.00th=[21890], 00:34:53.234 | 70.00th=[28443], 80.00th=[33162], 90.00th=[42730], 95.00th=[67634], 00:34:53.234 | 99.00th=[95945], 99.50th=[95945], 99.90th=[98042], 99.95th=[98042], 00:34:53.234 | 99.99th=[98042] 00:34:53.234 write: IOPS=2708, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1005msec); 0 zone resets 00:34:53.234 slat (usec): min=2, max=23488, avg=153.61, stdev=1282.59 00:34:53.234 clat (usec): min=448, max=98530, avg=23082.98, stdev=21624.67 00:34:53.234 lat (usec): min=504, max=98547, avg=23236.59, stdev=21722.83 00:34:53.234 clat percentiles (usec): 00:34:53.234 | 1.00th=[ 4490], 5.00th=[ 7177], 10.00th=[ 9110], 20.00th=[10945], 00:34:53.234 | 30.00th=[11469], 40.00th=[13435], 50.00th=[14746], 60.00th=[16909], 00:34:53.234 | 70.00th=[21365], 80.00th=[27395], 90.00th=[65799], 95.00th=[79168], 00:34:53.234 | 99.00th=[98042], 99.50th=[98042], 99.90th=[98042], 99.95th=[98042], 00:34:53.234 | 99.99th=[98042] 00:34:53.234 bw ( KiB/s): min= 6640, max=14112, per=15.79%, avg=10376.00, stdev=5283.50, samples=2 00:34:53.234 iops : min= 1660, max= 3528, avg=2594.00, stdev=1320.88, samples=2 00:34:53.234 lat (usec) : 500=0.02% 00:34:53.234 lat (msec) : 2=0.02%, 10=9.28%, 20=50.57%, 50=30.44%, 100=9.67% 00:34:53.234 cpu : usr=3.49%, sys=1.99%, ctx=172, majf=0, minf=1 00:34:53.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 
16=0.3%, 32=0.6%, >=64=98.8% 00:34:53.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:53.234 issued rwts: total=2560,2722,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.234 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:53.234 00:34:53.234 Run status group 0 (all jobs): 00:34:53.234 READ: bw=59.6MiB/s (62.5MB/s), 9.95MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=60.4MiB (63.3MB), run=1001-1014msec 00:34:53.234 WRITE: bw=64.2MiB/s (67.3MB/s), 10.6MiB/s-23.8MiB/s (11.1MB/s-24.9MB/s), io=65.1MiB (68.2MB), run=1001-1014msec 00:34:53.234 00:34:53.234 Disk stats (read/write): 00:34:53.234 nvme0n1: ios=3088/3247, merge=0/0, ticks=55997/48014, in_queue=104011, util=99.80% 00:34:53.234 nvme0n2: ios=3303/3618, merge=0/0, ticks=42328/63257, in_queue=105585, util=91.17% 00:34:53.234 nvme0n3: ios=5132/5127, merge=0/0, ticks=34289/31504, in_queue=65793, util=93.87% 00:34:53.234 nvme0n4: ios=2106/2243, merge=0/0, ticks=34230/27919, in_queue=62149, util=97.70% 00:34:53.234 11:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:53.234 11:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4161370 00:34:53.234 11:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:53.234 11:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:53.234 [global] 00:34:53.234 thread=1 00:34:53.234 invalidate=1 00:34:53.234 rw=read 00:34:53.234 time_based=1 00:34:53.234 runtime=10 00:34:53.234 ioengine=libaio 00:34:53.234 direct=1 00:34:53.234 bs=4096 00:34:53.234 iodepth=1 00:34:53.234 norandommap=1 00:34:53.234 numjobs=1 00:34:53.234 00:34:53.234 [job0] 00:34:53.234 filename=/dev/nvme0n1 
00:34:53.234 [job1] 00:34:53.234 filename=/dev/nvme0n2 00:34:53.234 [job2] 00:34:53.234 filename=/dev/nvme0n3 00:34:53.234 [job3] 00:34:53.234 filename=/dev/nvme0n4 00:34:53.234 Could not set queue depth (nvme0n1) 00:34:53.234 Could not set queue depth (nvme0n2) 00:34:53.234 Could not set queue depth (nvme0n3) 00:34:53.234 Could not set queue depth (nvme0n4) 00:34:53.492 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:53.493 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:53.493 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:53.493 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:53.493 fio-3.35 00:34:53.493 Starting 4 threads 00:34:56.020 11:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:56.279 11:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:56.279 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:34:56.279 fio: pid=4161720, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:56.538 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=42311680, buflen=4096 00:34:56.538 fio: pid=4161718, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:56.538 11:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:56.538 11:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:56.796 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=44339200, buflen=4096 00:34:56.796 fio: pid=4161672, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:56.796 11:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:56.796 11:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:56.796 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=380928, buflen=4096 00:34:56.796 fio: pid=4161691, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:56.796 11:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:56.796 11:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:56.796 00:34:56.796 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4161672: Tue Nov 19 11:02:46 2024 00:34:56.796 read: IOPS=3492, BW=13.6MiB/s (14.3MB/s)(42.3MiB/3100msec) 00:34:56.796 slat (usec): min=6, max=12819, avg= 9.53, stdev=123.13 00:34:56.796 clat (usec): min=189, max=41248, avg=273.35, stdev=1464.54 00:34:56.796 lat (usec): min=197, max=54068, avg=282.88, stdev=1502.79 00:34:56.796 clat percentiles (usec): 00:34:56.796 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:34:56.796 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 223], 00:34:56.796 | 70.00th=[ 225], 80.00th=[ 227], 90.00th=[ 231], 95.00th=[ 235], 00:34:56.796 | 
99.00th=[ 247], 99.50th=[ 253], 99.90th=[41157], 99.95th=[41157], 00:34:56.796 | 99.99th=[41157] 00:34:56.796 bw ( KiB/s): min= 100, max=17552, per=55.61%, avg=14430.00, stdev=7023.19, samples=6 00:34:56.796 iops : min= 25, max= 4388, avg=3607.50, stdev=1755.80, samples=6 00:34:56.796 lat (usec) : 250=99.30%, 500=0.53%, 750=0.04% 00:34:56.796 lat (msec) : 50=0.13% 00:34:56.796 cpu : usr=0.87%, sys=3.74%, ctx=10829, majf=0, minf=1 00:34:56.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.796 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.796 issued rwts: total=10826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:56.796 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4161691: Tue Nov 19 11:02:46 2024 00:34:56.796 read: IOPS=28, BW=113KiB/s (116kB/s)(372KiB/3286msec) 00:34:56.796 slat (usec): min=4, max=12852, avg=202.17, stdev=1408.59 00:34:56.796 clat (usec): min=238, max=42059, avg=34886.13, stdev=14613.96 00:34:56.796 lat (usec): min=249, max=54043, avg=35090.26, stdev=14764.47 00:34:56.796 clat percentiles (usec): 00:34:56.796 | 1.00th=[ 239], 5.00th=[ 297], 10.00th=[ 367], 20.00th=[40633], 00:34:56.796 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:56.796 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:56.796 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:56.796 | 99.99th=[42206] 00:34:56.796 bw ( KiB/s): min= 96, max= 144, per=0.43%, avg=112.17, stdev=20.40, samples=6 00:34:56.796 iops : min= 24, max= 36, avg=28.00, stdev= 5.06, samples=6 00:34:56.796 lat (usec) : 250=1.06%, 500=11.70%, 750=2.13% 00:34:56.796 lat (msec) : 50=84.04% 00:34:56.796 cpu : usr=0.00%, sys=0.06%, ctx=96, majf=0, minf=2 
00:34:56.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.796 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.796 issued rwts: total=94,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:56.796 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4161718: Tue Nov 19 11:02:46 2024 00:34:56.796 read: IOPS=3594, BW=14.0MiB/s (14.7MB/s)(40.4MiB/2874msec) 00:34:56.796 slat (usec): min=6, max=15653, avg=11.27, stdev=194.80 00:34:56.796 clat (usec): min=186, max=1925, avg=264.18, stdev=54.23 00:34:56.796 lat (usec): min=198, max=16059, avg=275.45, stdev=204.05 00:34:56.796 clat percentiles (usec): 00:34:56.796 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 223], 00:34:56.796 | 30.00th=[ 227], 40.00th=[ 237], 50.00th=[ 251], 60.00th=[ 293], 00:34:56.796 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 318], 00:34:56.796 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 490], 99.95th=[ 1467], 00:34:56.796 | 99.99th=[ 1795] 00:34:56.796 bw ( KiB/s): min=12672, max=17112, per=55.77%, avg=14470.40, stdev=2259.64, samples=5 00:34:56.796 iops : min= 3168, max= 4278, avg=3617.60, stdev=564.91, samples=5 00:34:56.796 lat (usec) : 250=49.53%, 500=50.36%, 750=0.02% 00:34:56.796 lat (msec) : 2=0.08% 00:34:56.796 cpu : usr=1.60%, sys=6.02%, ctx=10334, majf=0, minf=1 00:34:56.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.796 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.796 issued rwts: total=10331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:56.796 job3: (groupid=0, jobs=1): 
err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4161720: Tue Nov 19 11:02:46 2024 00:34:56.796 read: IOPS=25, BW=99.6KiB/s (102kB/s)(268KiB/2691msec) 00:34:56.796 slat (nsec): min=17125, max=34413, avg=22799.03, stdev=2090.00 00:34:56.796 clat (usec): min=556, max=42018, avg=39817.86, stdev=6860.09 00:34:56.796 lat (usec): min=591, max=42046, avg=39840.67, stdev=6858.96 00:34:56.796 clat percentiles (usec): 00:34:56.796 | 1.00th=[ 553], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:56.796 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:56.796 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:56.796 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:56.796 | 99.99th=[42206] 00:34:56.796 bw ( KiB/s): min= 96, max= 104, per=0.38%, avg=99.20, stdev= 4.38, samples=5 00:34:56.796 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:34:56.796 lat (usec) : 750=1.47% 00:34:56.796 lat (msec) : 2=1.47%, 50=95.59% 00:34:56.796 cpu : usr=0.11%, sys=0.00%, ctx=68, majf=0, minf=2 00:34:56.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.796 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.796 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:56.796 00:34:56.796 Run status group 0 (all jobs): 00:34:56.796 READ: bw=25.3MiB/s (26.6MB/s), 99.6KiB/s-14.0MiB/s (102kB/s-14.7MB/s), io=83.3MiB (87.3MB), run=2691-3286msec 00:34:56.796 00:34:56.796 Disk stats (read/write): 00:34:56.796 nvme0n1: ios=10824/0, merge=0/0, ticks=2872/0, in_queue=2872, util=93.96% 00:34:56.796 nvme0n2: ios=86/0, merge=0/0, ticks=3039/0, in_queue=3039, util=94.54% 00:34:56.796 nvme0n3: ios=10120/0, merge=0/0, ticks=2856/0, in_queue=2856, util=98.84% 
00:34:56.796 nvme0n4: ios=64/0, merge=0/0, ticks=2547/0, in_queue=2547, util=96.39% 00:34:57.054 11:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:57.054 11:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:57.311 11:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:57.311 11:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:57.569 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:57.569 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:57.827 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:57.827 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:57.827 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:57.827 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 4161370 00:34:57.827 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:57.827 11:02:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:58.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:58.086 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:58.086 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:58.086 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:58.086 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:58.086 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:58.086 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:58.086 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:58.086 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:58.086 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:58.086 nvmf hotplug test: fio failed as expected 00:34:58.086 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:58.345 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f 
./local-job1-1-verify.state 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:58.346 rmmod nvme_tcp 00:34:58.346 rmmod nvme_fabrics 00:34:58.346 rmmod nvme_keyring 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 4158898 ']' 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 4158898 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 4158898 ']' 00:34:58.346 11:02:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 4158898 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:58.346 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:58.346 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4158898 00:34:58.346 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:58.346 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:58.346 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4158898' 00:34:58.346 killing process with pid 4158898 00:34:58.346 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 4158898 00:34:58.346 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 4158898 00:34:58.605 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:58.605 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:58.605 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:58.605 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:58.605 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:58.605 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 
00:34:58.605 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:58.605 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:58.605 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:58.605 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:58.605 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:58.605 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.510 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:00.510 00:35:00.510 real 0m25.790s 00:35:00.510 user 1m31.185s 00:35:00.510 sys 0m11.310s 00:35:00.510 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:00.510 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:00.510 ************************************ 00:35:00.510 END TEST nvmf_fio_target 00:35:00.510 ************************************ 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@10 -- # set +x 00:35:00.770 ************************************ 00:35:00.770 START TEST nvmf_bdevio 00:35:00.770 ************************************ 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:00.770 * Looking for test storage... 00:35:00.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 
-- # local 'op=<' 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@366 -- # ver2[v]=2 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:00.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.770 --rc genhtml_branch_coverage=1 00:35:00.770 --rc genhtml_function_coverage=1 00:35:00.770 --rc genhtml_legend=1 00:35:00.770 --rc geninfo_all_blocks=1 00:35:00.770 --rc geninfo_unexecuted_blocks=1 00:35:00.770 00:35:00.770 ' 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:00.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.770 --rc genhtml_branch_coverage=1 00:35:00.770 --rc genhtml_function_coverage=1 00:35:00.770 --rc genhtml_legend=1 00:35:00.770 --rc geninfo_all_blocks=1 00:35:00.770 --rc geninfo_unexecuted_blocks=1 00:35:00.770 00:35:00.770 ' 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:00.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.770 --rc genhtml_branch_coverage=1 00:35:00.770 --rc genhtml_function_coverage=1 00:35:00.770 --rc genhtml_legend=1 00:35:00.770 --rc geninfo_all_blocks=1 00:35:00.770 --rc geninfo_unexecuted_blocks=1 00:35:00.770 00:35:00.770 ' 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:00.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.770 --rc genhtml_branch_coverage=1 00:35:00.770 --rc genhtml_function_coverage=1 00:35:00.770 --rc genhtml_legend=1 00:35:00.770 --rc geninfo_all_blocks=1 00:35:00.770 --rc geninfo_unexecuted_blocks=1 00:35:00.770 00:35:00.770 ' 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:00.770 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:00.771 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:01.029 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.030 11:02:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:01.030 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:07.601 11:02:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:07.601 11:02:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:07.601 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:07.601 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:07.601 Found net devices under 0000:86:00.0: cvl_0_0 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:07.601 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:07.602 Found net devices under 0000:86:00.1: cvl_0_1 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:07.602 
11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:07.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:07.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:35:07.602 00:35:07.602 --- 10.0.0.2 ping statistics --- 00:35:07.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.602 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:07.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:07.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:35:07.602 00:35:07.602 --- 10.0.0.1 ping statistics --- 00:35:07.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.602 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=4165953 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 4165953 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 4165953 ']' 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:07.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:07.602 11:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:07.602 [2024-11-19 11:02:56.545883] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:07.602 [2024-11-19 11:02:56.546811] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:35:07.602 [2024-11-19 11:02:56.546843] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:07.602 [2024-11-19 11:02:56.627176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:07.602 [2024-11-19 11:02:56.667150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:07.602 [2024-11-19 11:02:56.667186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:07.602 [2024-11-19 11:02:56.667193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:07.602 [2024-11-19 11:02:56.667199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:07.602 [2024-11-19 11:02:56.667207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:07.602 [2024-11-19 11:02:56.668833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:07.602 [2024-11-19 11:02:56.668942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:07.602 [2024-11-19 11:02:56.669049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:07.602 [2024-11-19 11:02:56.669051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:07.602 [2024-11-19 11:02:56.735668] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:07.602 [2024-11-19 11:02:56.736786] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:07.602 [2024-11-19 11:02:56.736920] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:07.602 [2024-11-19 11:02:56.737218] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:07.602 [2024-11-19 11:02:56.737280] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:07.602 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:07.602 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:07.602 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:07.602 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:07.602 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:07.920 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:07.920 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:07.920 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.920 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:07.920 [2024-11-19 11:02:57.421827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:07.920 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.920 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:07.920 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.920 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:07.920 Malloc0 00:35:07.920 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.920 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:07.920 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.920 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:07.920 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.920 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:07.921 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.921 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:07.921 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.921 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:07.921 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.921 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:07.921 [2024-11-19 11:02:57.505980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:35:07.921 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.921 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:07.921 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:07.921 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:07.921 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:07.921 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:07.921 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:07.921 { 00:35:07.921 "params": { 00:35:07.921 "name": "Nvme$subsystem", 00:35:07.921 "trtype": "$TEST_TRANSPORT", 00:35:07.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:07.921 "adrfam": "ipv4", 00:35:07.921 "trsvcid": "$NVMF_PORT", 00:35:07.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:07.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:07.921 "hdgst": ${hdgst:-false}, 00:35:07.921 "ddgst": ${ddgst:-false} 00:35:07.921 }, 00:35:07.921 "method": "bdev_nvme_attach_controller" 00:35:07.921 } 00:35:07.921 EOF 00:35:07.921 )") 00:35:07.921 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:07.921 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:35:07.921 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:07.921 11:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:07.921 "params": { 00:35:07.921 "name": "Nvme1", 00:35:07.921 "trtype": "tcp", 00:35:07.921 "traddr": "10.0.0.2", 00:35:07.921 "adrfam": "ipv4", 00:35:07.921 "trsvcid": "4420", 00:35:07.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:07.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:07.921 "hdgst": false, 00:35:07.921 "ddgst": false 00:35:07.921 }, 00:35:07.921 "method": "bdev_nvme_attach_controller" 00:35:07.921 }' 00:35:07.921 [2024-11-19 11:02:57.555504] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:35:07.921 [2024-11-19 11:02:57.555544] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4166083 ] 00:35:07.921 [2024-11-19 11:02:57.630677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:07.921 [2024-11-19 11:02:57.678226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:07.921 [2024-11-19 11:02:57.678335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:07.921 [2024-11-19 11:02:57.678336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:08.214 I/O targets: 00:35:08.214 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:08.214 00:35:08.214 00:35:08.214 CUnit - A unit testing framework for C - Version 2.1-3 00:35:08.214 http://cunit.sourceforge.net/ 00:35:08.214 00:35:08.214 00:35:08.214 Suite: bdevio tests on: Nvme1n1 00:35:08.214 Test: blockdev write read block ...passed 00:35:08.214 Test: blockdev write zeroes read block ...passed 00:35:08.214 Test: blockdev write zeroes read no split ...passed 00:35:08.214 Test: blockdev 
write zeroes read split ...passed 00:35:08.214 Test: blockdev write zeroes read split partial ...passed 00:35:08.214 Test: blockdev reset ...[2024-11-19 11:02:57.940138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:08.214 [2024-11-19 11:02:57.940199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54340 (9): Bad file descriptor 00:35:08.214 [2024-11-19 11:02:57.985177] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:35:08.214 passed 00:35:08.490 Test: blockdev write read 8 blocks ...passed 00:35:08.490 Test: blockdev write read size > 128k ...passed 00:35:08.490 Test: blockdev write read invalid size ...passed 00:35:08.490 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:08.490 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:08.490 Test: blockdev write read max offset ...passed 00:35:08.490 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:08.490 Test: blockdev writev readv 8 blocks ...passed 00:35:08.490 Test: blockdev writev readv 30 x 1block ...passed 00:35:08.490 Test: blockdev writev readv block ...passed 00:35:08.490 Test: blockdev writev readv size > 128k ...passed 00:35:08.490 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:08.490 Test: blockdev comparev and writev ...[2024-11-19 11:02:58.235949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:08.491 [2024-11-19 11:02:58.235976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.491 [2024-11-19 11:02:58.235990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:08.491 
[2024-11-19 11:02:58.235997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.491 [2024-11-19 11:02:58.236290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:08.491 [2024-11-19 11:02:58.236300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:08.491 [2024-11-19 11:02:58.236311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:08.491 [2024-11-19 11:02:58.236318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:08.491 [2024-11-19 11:02:58.236606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:08.491 [2024-11-19 11:02:58.236615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:08.491 [2024-11-19 11:02:58.236626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:08.491 [2024-11-19 11:02:58.236633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:08.491 [2024-11-19 11:02:58.236916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:08.491 [2024-11-19 11:02:58.236927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:08.491 [2024-11-19 11:02:58.236938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:08.491 [2024-11-19 11:02:58.236945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:08.774 passed 00:35:08.774 Test: blockdev nvme passthru rw ...passed 00:35:08.774 Test: blockdev nvme passthru vendor specific ...[2024-11-19 11:02:58.318545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:08.774 [2024-11-19 11:02:58.318567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:08.774 [2024-11-19 11:02:58.318681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:08.774 [2024-11-19 11:02:58.318691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:08.774 [2024-11-19 11:02:58.318800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:08.774 [2024-11-19 11:02:58.318809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:08.775 [2024-11-19 11:02:58.318911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:08.775 [2024-11-19 11:02:58.318920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:08.775 passed 00:35:08.775 Test: blockdev nvme admin passthru ...passed 00:35:08.775 Test: blockdev copy ...passed 00:35:08.775 00:35:08.775 Run Summary: Type Total Ran Passed Failed Inactive 00:35:08.775 suites 1 1 n/a 0 0 00:35:08.775 tests 23 23 23 0 0 00:35:08.775 asserts 152 152 152 0 n/a 00:35:08.775 00:35:08.775 Elapsed time = 1.087 
seconds 00:35:08.775 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:08.775 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.775 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:08.775 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.775 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:08.775 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:08.775 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:08.775 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:08.775 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:08.775 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:08.775 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:08.775 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:08.775 rmmod nvme_tcp 00:35:08.775 rmmod nvme_fabrics 00:35:08.775 rmmod nvme_keyring 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 4165953 ']' 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 4165953 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 4165953 ']' 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 4165953 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4165953 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4165953' 00:35:09.079 killing process with pid 4165953 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 4165953 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 4165953 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:09.079 11:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:11.618 11:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:11.618 00:35:11.618 real 0m10.528s 00:35:11.618 user 0m8.485s 00:35:11.618 sys 0m5.228s 00:35:11.618 11:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:11.618 11:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:11.618 ************************************ 00:35:11.618 END TEST nvmf_bdevio 00:35:11.618 ************************************ 00:35:11.618 11:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:11.618 00:35:11.618 real 4m34.171s 00:35:11.618 user 9m6.119s 00:35:11.618 sys 1m52.528s 00:35:11.618 11:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:35:11.618 11:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:11.618 ************************************ 00:35:11.618 END TEST nvmf_target_core_interrupt_mode 00:35:11.618 ************************************ 00:35:11.618 11:03:00 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:11.618 11:03:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:11.618 11:03:00 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:11.618 11:03:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:11.618 ************************************ 00:35:11.618 START TEST nvmf_interrupt 00:35:11.618 ************************************ 00:35:11.618 11:03:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:11.618 * Looking for test storage... 
00:35:11.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:11.618 11:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:11.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.619 --rc genhtml_branch_coverage=1 00:35:11.619 --rc genhtml_function_coverage=1 00:35:11.619 --rc genhtml_legend=1 00:35:11.619 --rc geninfo_all_blocks=1 00:35:11.619 --rc geninfo_unexecuted_blocks=1 00:35:11.619 00:35:11.619 ' 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:11.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.619 --rc genhtml_branch_coverage=1 00:35:11.619 --rc 
genhtml_function_coverage=1 00:35:11.619 --rc genhtml_legend=1 00:35:11.619 --rc geninfo_all_blocks=1 00:35:11.619 --rc geninfo_unexecuted_blocks=1 00:35:11.619 00:35:11.619 ' 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:11.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.619 --rc genhtml_branch_coverage=1 00:35:11.619 --rc genhtml_function_coverage=1 00:35:11.619 --rc genhtml_legend=1 00:35:11.619 --rc geninfo_all_blocks=1 00:35:11.619 --rc geninfo_unexecuted_blocks=1 00:35:11.619 00:35:11.619 ' 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:11.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.619 --rc genhtml_branch_coverage=1 00:35:11.619 --rc genhtml_function_coverage=1 00:35:11.619 --rc genhtml_legend=1 00:35:11.619 --rc geninfo_all_blocks=1 00:35:11.619 --rc geninfo_unexecuted_blocks=1 00:35:11.619 00:35:11.619 ' 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:11.619 
11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.619 
11:03:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:11.619 11:03:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:11.619 
11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:11.619 11:03:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:18.191 11:03:06 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:18.191 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:18.191 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:18.191 11:03:06 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:18.191 Found net devices under 0000:86:00.0: cvl_0_0 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:18.191 Found net devices under 0000:86:00.1: cvl_0_1 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:18.191 11:03:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:18.191 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:18.191 11:03:07 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:18.191 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:18.191 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:18.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:18.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:35:18.191 00:35:18.191 --- 10.0.0.2 ping statistics --- 00:35:18.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:18.191 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:35:18.191 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:18.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:18.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:35:18.191 00:35:18.191 --- 10.0.0.1 ping statistics --- 00:35:18.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:18.191 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:35:18.191 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:18.191 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:18.191 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:18.191 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:18.192 11:03:07 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=4169887 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 4169887 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 4169887 ']' 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:18.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:18.192 [2024-11-19 11:03:07.190073] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:18.192 [2024-11-19 11:03:07.190954] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:35:18.192 [2024-11-19 11:03:07.190987] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:18.192 [2024-11-19 11:03:07.266951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:18.192 [2024-11-19 11:03:07.308094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:18.192 [2024-11-19 11:03:07.308130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:18.192 [2024-11-19 11:03:07.308137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:18.192 [2024-11-19 11:03:07.308143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:18.192 [2024-11-19 11:03:07.308148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:18.192 [2024-11-19 11:03:07.309323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.192 [2024-11-19 11:03:07.309333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:18.192 [2024-11-19 11:03:07.375683] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:18.192 [2024-11-19 11:03:07.376230] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:18.192 [2024-11-19 11:03:07.376463] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:18.192 5000+0 records in 00:35:18.192 5000+0 records out 00:35:18.192 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0166062 s, 617 MB/s 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:18.192 AIO0 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.192 11:03:07 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:18.192 [2024-11-19 11:03:07.506034] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:18.192 [2024-11-19 11:03:07.542357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4169887 0 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4169887 0 idle 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4169887 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4169887 -w 256 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4169887 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.25 reactor_0' 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4169887 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.25 reactor_0 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:18.192 
11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4169887 1 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4169887 1 idle 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4169887 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:18.192 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4169887 -w 256 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4169891 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4169891 root 20 0 128.2g 
46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=4170058 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4169887 0 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4169887 0 busy 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4169887 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4169887 -w 256 00:35:18.193 11:03:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:18.451 11:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4169887 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:00.25 reactor_0' 00:35:18.451 11:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4169887 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:00.25 reactor_0 00:35:18.451 11:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:18.451 11:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:18.451 11:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:18.451 11:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:18.451 11:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:18.451 11:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:18.451 11:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:35:19.383 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:35:19.383 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:19.383 11:03:09 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 4169887 -w 256 00:35:19.383 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4169887 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.55 reactor_0' 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4169887 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.55 reactor_0 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4169887 1 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4169887 1 busy 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4169887 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4169887 -w 256 00:35:19.640 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:19.908 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4169891 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.33 reactor_1' 00:35:19.908 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4169891 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.33 reactor_1 00:35:19.908 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:19.908 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:19.908 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:19.908 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:19.908 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:19.908 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:19.908 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:19.908 11:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:19.908 11:03:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 4170058 00:35:29.874 Initializing NVMe Controllers 00:35:29.874 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:29.874 
Controller IO queue size 256, less than required. 00:35:29.874 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:29.874 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:29.874 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:29.874 Initialization complete. Launching workers. 00:35:29.874 ======================================================== 00:35:29.874 Latency(us) 00:35:29.874 Device Information : IOPS MiB/s Average min max 00:35:29.874 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16314.66 63.73 15700.26 4262.06 32169.30 00:35:29.874 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16423.76 64.16 15591.71 7583.26 26686.43 00:35:29.874 ======================================================== 00:35:29.874 Total : 32738.41 127.88 15645.80 4262.06 32169.30 00:35:29.874 00:35:29.874 [2024-11-19 11:03:18.163321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3400 is same with the state(6) to be set 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4169887 0 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4169887 0 idle 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4169887 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4169887 -w 256 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4169887 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.25 reactor_0' 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4169887 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.25 reactor_0 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4169887 1 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4169887 1 idle 
00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4169887 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:29.874 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4169887 -w 256 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4169891 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4169891 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle 
= \i\d\l\e ]] 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:29.875 11:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4169887 0 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4169887 0 idle 00:35:31.254 11:03:20 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4169887 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4169887 -w 256 00:35:31.254 11:03:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4169887 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.46 reactor_0' 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4169887 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.46 reactor_0 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 
00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4169887 1 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4169887 1 idle 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4169887 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4169887 -w 256 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4169891 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1' 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4169891 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:31.513 11:03:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:31.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:31.772 11:03:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:31.772 11:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:31.772 11:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:31.772 11:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:31.772 11:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:31.772 11:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:31.772 11:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:31.772 11:03:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:31.772 11:03:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:31.772 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:31.772 11:03:21 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@121 -- # sync 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:32.031 rmmod nvme_tcp 00:35:32.031 rmmod nvme_fabrics 00:35:32.031 rmmod nvme_keyring 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 4169887 ']' 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 4169887 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 4169887 ']' 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 4169887 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4169887 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4169887' 00:35:32.031 killing process with pid 4169887 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 4169887 00:35:32.031 11:03:21 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@978 -- # wait 4169887 00:35:32.290 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:32.290 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:32.290 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:32.290 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:32.290 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:32.290 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:32.290 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:32.290 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:32.290 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:32.290 11:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.290 11:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:32.290 11:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.196 11:03:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:34.196 00:35:34.196 real 0m22.954s 00:35:34.196 user 0m39.821s 00:35:34.196 sys 0m8.345s 00:35:34.196 11:03:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:34.196 11:03:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:34.196 ************************************ 00:35:34.196 END TEST nvmf_interrupt 00:35:34.196 ************************************ 00:35:34.456 00:35:34.456 real 27m36.853s 00:35:34.456 user 56m50.842s 00:35:34.456 sys 9m17.792s 00:35:34.456 11:03:23 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:34.456 11:03:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:35:34.456 ************************************ 00:35:34.456 END TEST nvmf_tcp 00:35:34.456 ************************************ 00:35:34.456 11:03:24 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:35:34.456 11:03:24 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:34.456 11:03:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:34.456 11:03:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:34.456 11:03:24 -- common/autotest_common.sh@10 -- # set +x 00:35:34.456 ************************************ 00:35:34.456 START TEST spdkcli_nvmf_tcp 00:35:34.456 ************************************ 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:34.456 * Looking for test storage... 00:35:34.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:34.456 
11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:34.456 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.456 --rc genhtml_branch_coverage=1 00:35:34.456 --rc genhtml_function_coverage=1 00:35:34.456 --rc genhtml_legend=1 00:35:34.456 --rc geninfo_all_blocks=1 00:35:34.456 --rc geninfo_unexecuted_blocks=1 00:35:34.456 00:35:34.456 ' 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:34.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.456 --rc genhtml_branch_coverage=1 00:35:34.456 --rc genhtml_function_coverage=1 00:35:34.456 --rc genhtml_legend=1 00:35:34.456 --rc geninfo_all_blocks=1 00:35:34.456 --rc geninfo_unexecuted_blocks=1 00:35:34.456 00:35:34.456 ' 00:35:34.456 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:34.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.456 --rc genhtml_branch_coverage=1 00:35:34.456 --rc genhtml_function_coverage=1 00:35:34.457 --rc genhtml_legend=1 00:35:34.457 --rc geninfo_all_blocks=1 00:35:34.457 --rc geninfo_unexecuted_blocks=1 00:35:34.457 00:35:34.457 ' 00:35:34.457 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:34.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.457 --rc genhtml_branch_coverage=1 00:35:34.457 --rc genhtml_function_coverage=1 00:35:34.457 --rc genhtml_legend=1 00:35:34.457 --rc geninfo_all_blocks=1 00:35:34.457 --rc geninfo_unexecuted_blocks=1 00:35:34.457 00:35:34.457 ' 00:35:34.457 11:03:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:34.457 11:03:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:34.457 11:03:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:34.457 11:03:24 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:34.457 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:34.457 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:34.457 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:34.457 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:34.457 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:34.457 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:34.457 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:34.457 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:34.457 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:34.457 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:34.457 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:34.717 11:03:24 spdkcli_nvmf_tcp 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:34.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=4173227 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 4173227 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 4173227 ']' 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:34.717 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:34.717 [2024-11-19 11:03:24.318546] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:35:34.717 [2024-11-19 11:03:24.318592] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173227 ] 00:35:34.717 [2024-11-19 11:03:24.391526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:34.717 [2024-11-19 11:03:24.434966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.717 [2024-11-19 11:03:24.434967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.977 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:34.977 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:35:34.977 11:03:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:34.977 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:34.977 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:34.977 11:03:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:34.977 11:03:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:34.977 11:03:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:34.977 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:34.977 11:03:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:34.977 11:03:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:34.977 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:34.977 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:34.977 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:34.977 '\''/bdevs/malloc create 32 
512 Malloc5'\'' '\''Malloc5'\'' True 00:35:34.977 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:34.977 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:34.977 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:34.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:34.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:34.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:34.977 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:34.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:34.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:34.977 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:34.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:34.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:34.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:34.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:34.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:34.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:34.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:34.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:34.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:34.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:34.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:34.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:34.977 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:34.977 ' 00:35:37.511 [2024-11-19 11:03:27.250284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:38.888 [2024-11-19 11:03:28.582735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:41.422 [2024-11-19 11:03:31.070342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:43.954 [2024-11-19 11:03:33.229046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:45.332 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:45.332 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:45.332 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:45.332 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:45.332 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:45.332 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:45.332 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:45.332 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:45.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:45.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:45.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:45.332 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:45.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:45.332 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:45.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:45.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:45.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:45.332 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:45.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:45.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:45.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:45.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:45.332 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:45.332 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:45.332 11:03:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:45.332 11:03:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:45.332 11:03:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.332 11:03:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:45.332 11:03:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.332 11:03:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.332 11:03:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:45.332 11:03:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:45.900 11:03:35 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:45.900 11:03:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:45.900 11:03:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:45.900 11:03:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:45.900 11:03:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.900 11:03:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:45.900 11:03:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.900 11:03:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.900 11:03:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:45.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:45.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:45.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:45.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:45.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:45.900 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:45.900 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:45.900 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 00:35:45.900 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:45.900 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:45.900 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:45.900 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:45.900 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:45.900 ' 00:35:52.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:52.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:52.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:52.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:52.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:52.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:52.464 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:52.464 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:52.464 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:52.464 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:52.464 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:52.464 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:52.464 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:52.464 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 4173227 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 4173227 ']' 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 4173227 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4173227 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4173227' 00:35:52.464 killing process with pid 4173227 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 4173227 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 4173227 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 4173227 ']' 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 4173227 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 4173227 ']' 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 4173227 00:35:52.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4173227) - No such process 00:35:52.464 11:03:41 
spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 4173227 is not found' 00:35:52.464 Process with pid 4173227 is not found 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:52.464 00:35:52.464 real 0m17.312s 00:35:52.464 user 0m38.108s 00:35:52.464 sys 0m0.813s 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:52.464 11:03:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.464 ************************************ 00:35:52.464 END TEST spdkcli_nvmf_tcp 00:35:52.464 ************************************ 00:35:52.464 11:03:41 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:52.464 11:03:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:52.464 11:03:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:52.464 11:03:41 -- common/autotest_common.sh@10 -- # set +x 00:35:52.464 ************************************ 00:35:52.464 START TEST nvmf_identify_passthru 00:35:52.464 ************************************ 00:35:52.464 11:03:41 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:52.464 * Looking for test storage... 
00:35:52.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:52.464 11:03:41 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:52.464 11:03:41 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:35:52.464 11:03:41 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:52.464 11:03:41 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:52.464 11:03:41 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:52.464 11:03:41 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:52.464 11:03:41 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:52.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.464 --rc genhtml_branch_coverage=1 00:35:52.464 --rc genhtml_function_coverage=1 00:35:52.464 --rc genhtml_legend=1 00:35:52.464 --rc geninfo_all_blocks=1 00:35:52.464 --rc geninfo_unexecuted_blocks=1 00:35:52.464 00:35:52.464 ' 00:35:52.464 11:03:41 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:52.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.464 --rc genhtml_branch_coverage=1 00:35:52.464 --rc genhtml_function_coverage=1 
00:35:52.464 --rc genhtml_legend=1 00:35:52.464 --rc geninfo_all_blocks=1 00:35:52.464 --rc geninfo_unexecuted_blocks=1 00:35:52.464 00:35:52.464 ' 00:35:52.464 11:03:41 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:52.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.464 --rc genhtml_branch_coverage=1 00:35:52.464 --rc genhtml_function_coverage=1 00:35:52.464 --rc genhtml_legend=1 00:35:52.464 --rc geninfo_all_blocks=1 00:35:52.464 --rc geninfo_unexecuted_blocks=1 00:35:52.464 00:35:52.464 ' 00:35:52.464 11:03:41 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:52.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.464 --rc genhtml_branch_coverage=1 00:35:52.464 --rc genhtml_function_coverage=1 00:35:52.464 --rc genhtml_legend=1 00:35:52.464 --rc geninfo_all_blocks=1 00:35:52.464 --rc geninfo_unexecuted_blocks=1 00:35:52.464 00:35:52.464 ' 00:35:52.464 11:03:41 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:52.464 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:52.464 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:52.464 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:52.464 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:52.464 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:52.464 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:52.464 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:52.464 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:52.464 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:52.464 11:03:41 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:52.464 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:52.464 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:52.465 11:03:41 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:52.465 11:03:41 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:52.465 11:03:41 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:52.465 11:03:41 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:52.465 11:03:41 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.465 11:03:41 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.465 11:03:41 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.465 11:03:41 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:52.465 11:03:41 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:52.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:52.465 11:03:41 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:52.465 11:03:41 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:52.465 11:03:41 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:52.465 11:03:41 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:52.465 11:03:41 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:52.465 11:03:41 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.465 11:03:41 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.465 11:03:41 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.465 11:03:41 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:52.465 11:03:41 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.465 11:03:41 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@476 -- 
# prepare_net_devs 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:52.465 11:03:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:52.465 11:03:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:52.465 11:03:41 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:52.465 11:03:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:57.743 11:03:47 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:57.743 
11:03:47 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:57.743 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:57.743 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:57.743 Found net devices under 0000:86:00.0: cvl_0_0 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:57.743 Found net devices under 0000:86:00.1: cvl_0_1 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:57.743 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:57.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:57.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:35:57.744 00:35:57.744 --- 10.0.0.2 ping statistics --- 00:35:57.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:57.744 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:57.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:57.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:35:57.744 00:35:57.744 --- 10.0.0.1 ping statistics --- 00:35:57.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:57.744 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:57.744 11:03:47 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:58.003 11:03:47 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:58.003 11:03:47 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:58.003 11:03:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:58.003 11:03:47 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:58.003 11:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:58.003 11:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:58.003 11:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:35:58.003 11:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:35:58.003 11:03:47 nvmf_identify_passthru -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:35:58.003 11:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:35:58.003 11:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:58.003 11:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:58.003 11:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:35:58.003 11:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:35:58.004 11:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:35:58.004 11:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:35:58.004 11:03:47 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:35:58.004 11:03:47 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:35:58.004 11:03:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:35:58.004 11:03:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:58.004 11:03:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:03.386 11:03:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:36:03.386 11:03:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:36:03.386 11:03:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:03.386 11:03:52 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:07.581 11:03:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:36:07.581 11:03:57 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:07.581 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:07.581 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:07.581 11:03:57 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:07.581 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:07.581 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:07.581 11:03:57 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=4180515 00:36:07.581 11:03:57 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:07.581 11:03:57 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:07.581 11:03:57 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 4180515 00:36:07.581 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 4180515 ']' 00:36:07.581 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:07.581 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:07.581 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:07.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:07.581 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:07.581 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:07.581 [2024-11-19 11:03:57.169821] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:36:07.581 [2024-11-19 11:03:57.169869] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:07.581 [2024-11-19 11:03:57.230280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:07.581 [2024-11-19 11:03:57.273795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:07.581 [2024-11-19 11:03:57.273831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:07.581 [2024-11-19 11:03:57.273838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:07.581 [2024-11-19 11:03:57.273844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:07.581 [2024-11-19 11:03:57.273849] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:07.581 [2024-11-19 11:03:57.275485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:07.581 [2024-11-19 11:03:57.275605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:07.581 [2024-11-19 11:03:57.275714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:07.581 [2024-11-19 11:03:57.275716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:07.581 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:07.581 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:36:07.581 11:03:57 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:07.581 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.581 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:07.581 INFO: Log level set to 20 00:36:07.581 INFO: Requests: 00:36:07.581 { 00:36:07.581 "jsonrpc": "2.0", 00:36:07.581 "method": "nvmf_set_config", 00:36:07.581 "id": 1, 00:36:07.581 "params": { 00:36:07.581 "admin_cmd_passthru": { 00:36:07.581 "identify_ctrlr": true 00:36:07.581 } 00:36:07.581 } 00:36:07.581 } 00:36:07.581 00:36:07.581 INFO: response: 00:36:07.581 { 00:36:07.581 "jsonrpc": "2.0", 00:36:07.581 "id": 1, 00:36:07.581 "result": true 00:36:07.581 } 00:36:07.581 00:36:07.581 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.581 11:03:57 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:07.581 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.581 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:07.581 INFO: Setting log level to 20 00:36:07.581 INFO: Setting log level to 20 00:36:07.581 INFO: Log level set to 20 00:36:07.581 INFO: Log level set to 20 00:36:07.581 
INFO: Requests: 00:36:07.581 { 00:36:07.581 "jsonrpc": "2.0", 00:36:07.581 "method": "framework_start_init", 00:36:07.581 "id": 1 00:36:07.581 } 00:36:07.581 00:36:07.581 INFO: Requests: 00:36:07.581 { 00:36:07.581 "jsonrpc": "2.0", 00:36:07.581 "method": "framework_start_init", 00:36:07.581 "id": 1 00:36:07.581 } 00:36:07.581 00:36:07.838 [2024-11-19 11:03:57.399281] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:07.838 INFO: response: 00:36:07.838 { 00:36:07.838 "jsonrpc": "2.0", 00:36:07.838 "id": 1, 00:36:07.838 "result": true 00:36:07.838 } 00:36:07.838 00:36:07.838 INFO: response: 00:36:07.838 { 00:36:07.838 "jsonrpc": "2.0", 00:36:07.838 "id": 1, 00:36:07.839 "result": true 00:36:07.839 } 00:36:07.839 00:36:07.839 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.839 11:03:57 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:07.839 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.839 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:07.839 INFO: Setting log level to 40 00:36:07.839 INFO: Setting log level to 40 00:36:07.839 INFO: Setting log level to 40 00:36:07.839 [2024-11-19 11:03:57.412606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:07.839 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.839 11:03:57 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:07.839 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:07.839 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:07.839 11:03:57 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:36:07.839 11:03:57 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.839 11:03:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:11.121 Nvme0n1 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.121 11:04:00 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.121 11:04:00 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.121 11:04:00 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:11.121 [2024-11-19 11:04:00.321732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.121 11:04:00 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.121 11:04:00 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:11.121 [ 00:36:11.121 { 00:36:11.121 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:11.121 "subtype": "Discovery", 00:36:11.121 "listen_addresses": [], 00:36:11.121 "allow_any_host": true, 00:36:11.121 "hosts": [] 00:36:11.121 }, 00:36:11.121 { 00:36:11.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:11.121 "subtype": "NVMe", 00:36:11.121 "listen_addresses": [ 00:36:11.121 { 00:36:11.121 "trtype": "TCP", 00:36:11.121 "adrfam": "IPv4", 00:36:11.121 "traddr": "10.0.0.2", 00:36:11.121 "trsvcid": "4420" 00:36:11.121 } 00:36:11.121 ], 00:36:11.121 "allow_any_host": true, 00:36:11.121 "hosts": [], 00:36:11.121 "serial_number": "SPDK00000000000001", 00:36:11.121 "model_number": "SPDK bdev Controller", 00:36:11.121 "max_namespaces": 1, 00:36:11.121 "min_cntlid": 1, 00:36:11.121 "max_cntlid": 65519, 00:36:11.121 "namespaces": [ 00:36:11.121 { 00:36:11.121 "nsid": 1, 00:36:11.121 "bdev_name": "Nvme0n1", 00:36:11.121 "name": "Nvme0n1", 00:36:11.121 "nguid": "0EDDBD637BD845CCBE7B96DF3D10DA7D", 00:36:11.121 "uuid": "0eddbd63-7bd8-45cc-be7b-96df3d10da7d" 00:36:11.121 } 00:36:11.121 ] 00:36:11.121 } 00:36:11.121 ] 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.121 11:04:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:11.121 11:04:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:11.121 11:04:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:11.121 11:04:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:36:11.121 11:04:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:11.121 11:04:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:11.121 11:04:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:11.121 11:04:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:36:11.121 11:04:00 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:36:11.121 11:04:00 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:36:11.121 11:04:00 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.121 11:04:00 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:11.121 11:04:00 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:11.121 11:04:00 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:11.121 11:04:00 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:11.121 11:04:00 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:11.121 11:04:00 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:11.121 11:04:00 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:11.121 11:04:00 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:11.121 rmmod nvme_tcp 00:36:11.121 rmmod nvme_fabrics 00:36:11.121 rmmod nvme_keyring 00:36:11.121 11:04:00 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:11.121 11:04:00 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:11.121 11:04:00 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:11.121 11:04:00 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 4180515 ']' 00:36:11.121 11:04:00 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 4180515 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 4180515 ']' 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 4180515 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:11.121 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4180515 00:36:11.379 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:11.379 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:11.379 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4180515' 00:36:11.379 killing process with pid 4180515 00:36:11.379 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 4180515 00:36:11.379 11:04:00 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 4180515 00:36:13.279 11:04:02 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:13.279 11:04:02 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:13.280 11:04:02 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:13.280 11:04:02 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:13.280 11:04:02 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:13.280 11:04:02 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:13.280 11:04:02 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:13.280 11:04:02 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:13.280 11:04:02 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:13.280 11:04:02 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:13.280 11:04:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:13.280 11:04:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.186 11:04:04 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:15.186 00:36:15.186 real 0m23.481s 00:36:15.186 user 0m29.940s 00:36:15.186 sys 0m6.318s 00:36:15.186 11:04:04 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:15.186 11:04:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:15.186 ************************************ 00:36:15.186 END TEST nvmf_identify_passthru 00:36:15.186 ************************************ 00:36:15.186 11:04:04 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:15.186 11:04:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:15.186 11:04:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:15.186 11:04:04 -- common/autotest_common.sh@10 -- # set +x 00:36:15.445 ************************************ 00:36:15.445 START TEST nvmf_dif 00:36:15.445 ************************************ 00:36:15.445 11:04:04 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:15.445 * Looking for test storage... 
00:36:15.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:15.445 11:04:05 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:15.445 11:04:05 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:36:15.445 11:04:05 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:15.445 11:04:05 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.445 11:04:05 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:15.446 11:04:05 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.446 11:04:05 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:15.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.446 --rc genhtml_branch_coverage=1 00:36:15.446 --rc genhtml_function_coverage=1 00:36:15.446 --rc genhtml_legend=1 00:36:15.446 --rc geninfo_all_blocks=1 00:36:15.446 --rc geninfo_unexecuted_blocks=1 00:36:15.446 00:36:15.446 ' 00:36:15.446 11:04:05 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:15.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.446 --rc genhtml_branch_coverage=1 00:36:15.446 --rc genhtml_function_coverage=1 00:36:15.446 --rc genhtml_legend=1 00:36:15.446 --rc geninfo_all_blocks=1 00:36:15.446 --rc geninfo_unexecuted_blocks=1 00:36:15.446 00:36:15.446 ' 00:36:15.446 11:04:05 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:36:15.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.446 --rc genhtml_branch_coverage=1 00:36:15.446 --rc genhtml_function_coverage=1 00:36:15.446 --rc genhtml_legend=1 00:36:15.446 --rc geninfo_all_blocks=1 00:36:15.446 --rc geninfo_unexecuted_blocks=1 00:36:15.446 00:36:15.446 ' 00:36:15.446 11:04:05 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:15.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.446 --rc genhtml_branch_coverage=1 00:36:15.446 --rc genhtml_function_coverage=1 00:36:15.446 --rc genhtml_legend=1 00:36:15.446 --rc geninfo_all_blocks=1 00:36:15.446 --rc geninfo_unexecuted_blocks=1 00:36:15.446 00:36:15.446 ' 00:36:15.446 11:04:05 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:15.446 11:04:05 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.446 11:04:05 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.446 11:04:05 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.446 11:04:05 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.446 11:04:05 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.446 11:04:05 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.446 11:04:05 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.446 11:04:05 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.446 11:04:05 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:15.446 11:04:05 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:15.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:15.446 11:04:05 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:15.446 11:04:05 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:36:15.446 11:04:05 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:15.446 11:04:05 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:15.446 11:04:05 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.446 11:04:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:15.446 11:04:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:15.446 11:04:05 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:36:15.446 11:04:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:22.014 11:04:10 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:22.014 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:22.014 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:22.014 11:04:10 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:22.014 Found net devices under 0000:86:00.0: cvl_0_0 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:22.014 Found net devices under 0000:86:00.1: cvl_0_1 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:22.014 11:04:10 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:22.015 11:04:10 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:22.015 11:04:10 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:22.015 11:04:10 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:22.015 
11:04:10 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:22.015 11:04:10 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:22.015 11:04:10 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:22.015 11:04:10 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:22.015 11:04:10 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:22.015 11:04:10 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:22.015 11:04:10 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:22.015 11:04:10 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:22.015 11:04:10 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:22.015 11:04:10 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:22.015 11:04:10 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:22.015 11:04:10 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:22.015 11:04:10 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:22.015 11:04:10 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:22.015 11:04:10 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:22.015 11:04:11 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:22.015 11:04:11 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:22.015 11:04:11 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:22.015 11:04:11 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:22.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:22.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:36:22.015 00:36:22.015 --- 10.0.0.2 ping statistics --- 00:36:22.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.015 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:36:22.015 11:04:11 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:22.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:22.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:36:22.015 00:36:22.015 --- 10.0.0.1 ping statistics --- 00:36:22.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.015 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:36:22.015 11:04:11 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:22.015 11:04:11 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:22.015 11:04:11 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:22.015 11:04:11 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:24.552 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:36:24.552 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:24.552 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:36:24.552 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:36:24.552 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:36:24.552 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:36:24.552 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:36:24.552 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:36:24.552 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:36:24.552 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:36:24.552 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:36:24.552 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:36:24.552 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:36:24.552 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:36:24.552 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:36:24.552 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:36:24.552 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:36:24.552 11:04:13 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:24.552 11:04:13 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:24.552 11:04:13 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:24.552 11:04:13 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:24.552 11:04:13 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:24.552 11:04:13 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:24.552 11:04:13 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:24.552 11:04:13 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:24.552 11:04:13 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:24.552 11:04:13 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:24.552 11:04:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:24.552 11:04:13 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=4186198 00:36:24.552 11:04:13 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 4186198 00:36:24.552 11:04:13 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:24.552 11:04:13 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 4186198 ']' 00:36:24.552 11:04:13 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:24.552 11:04:13 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:24.552 11:04:13 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:24.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:24.552 11:04:13 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:24.552 11:04:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:24.553 [2024-11-19 11:04:14.040235] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:36:24.553 [2024-11-19 11:04:14.040272] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:24.553 [2024-11-19 11:04:14.100254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.553 [2024-11-19 11:04:14.141943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:24.553 [2024-11-19 11:04:14.141979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:24.553 [2024-11-19 11:04:14.141986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:24.553 [2024-11-19 11:04:14.141993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:24.553 [2024-11-19 11:04:14.141998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:24.553 [2024-11-19 11:04:14.142558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.553 11:04:14 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:24.553 11:04:14 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:36:24.553 11:04:14 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:24.553 11:04:14 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:24.553 11:04:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:24.553 11:04:14 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:24.553 11:04:14 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:24.553 11:04:14 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:24.553 11:04:14 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.553 11:04:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:24.553 [2024-11-19 11:04:14.276766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:24.553 11:04:14 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.553 11:04:14 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:24.553 11:04:14 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:24.553 11:04:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:24.553 11:04:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:24.553 ************************************ 00:36:24.553 START TEST fio_dif_1_default 00:36:24.553 ************************************ 00:36:24.553 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:36:24.553 11:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:24.553 11:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:24.553 11:04:14 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:36:24.553 11:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:24.553 11:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:24.553 11:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:24.553 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.553 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:24.553 bdev_null0 00:36:24.553 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.553 11:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:24.553 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.553 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:24.553 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.553 11:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:24.553 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.553 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:24.812 [2024-11-19 11:04:14.349103] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:24.812 { 00:36:24.812 "params": { 00:36:24.812 "name": "Nvme$subsystem", 00:36:24.812 "trtype": "$TEST_TRANSPORT", 00:36:24.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:24.812 "adrfam": "ipv4", 00:36:24.812 "trsvcid": "$NVMF_PORT", 00:36:24.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:24.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:24.812 "hdgst": ${hdgst:-false}, 00:36:24.812 "ddgst": ${ddgst:-false} 00:36:24.812 }, 00:36:24.812 "method": "bdev_nvme_attach_controller" 00:36:24.812 } 00:36:24.812 EOF 00:36:24.812 )") 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:24.812 "params": { 00:36:24.812 "name": "Nvme0", 00:36:24.812 "trtype": "tcp", 00:36:24.812 "traddr": "10.0.0.2", 00:36:24.812 "adrfam": "ipv4", 00:36:24.812 "trsvcid": "4420", 00:36:24.812 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:24.812 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:24.812 "hdgst": false, 00:36:24.812 "ddgst": false 00:36:24.812 }, 00:36:24.812 "method": "bdev_nvme_attach_controller" 00:36:24.812 }' 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:24.812 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:24.813 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:24.813 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:24.813 11:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:25.071 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:25.071 fio-3.35 
00:36:25.071 Starting 1 thread 00:36:37.279 00:36:37.279 filename0: (groupid=0, jobs=1): err= 0: pid=4186571: Tue Nov 19 11:04:25 2024 00:36:37.279 read: IOPS=96, BW=386KiB/s (396kB/s)(3872KiB/10021msec) 00:36:37.279 slat (nsec): min=5853, max=41199, avg=6274.92, stdev=1318.00 00:36:37.279 clat (usec): min=401, max=42562, avg=41387.70, stdev=2686.39 00:36:37.279 lat (usec): min=407, max=42568, avg=41393.97, stdev=2686.42 00:36:37.279 clat percentiles (usec): 00:36:37.279 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:37.279 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:36:37.279 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:37.279 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:36:37.279 | 99.99th=[42730] 00:36:37.279 bw ( KiB/s): min= 384, max= 416, per=99.64%, avg=385.60, stdev= 7.16, samples=20 00:36:37.279 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:36:37.279 lat (usec) : 500=0.41% 00:36:37.279 lat (msec) : 50=99.59% 00:36:37.279 cpu : usr=92.42%, sys=7.32%, ctx=16, majf=0, minf=0 00:36:37.279 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.279 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.279 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:37.279 00:36:37.279 Run status group 0 (all jobs): 00:36:37.279 READ: bw=386KiB/s (396kB/s), 386KiB/s-386KiB/s (396kB/s-396kB/s), io=3872KiB (3965kB), run=10021-10021msec 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:37.279 11:04:25 
nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.279 00:36:37.279 real 0m11.205s 00:36:37.279 user 0m16.395s 00:36:37.279 sys 0m1.027s 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:37.279 ************************************ 00:36:37.279 END TEST fio_dif_1_default 00:36:37.279 ************************************ 00:36:37.279 11:04:25 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:37.279 11:04:25 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:37.279 11:04:25 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:37.279 11:04:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:37.279 ************************************ 00:36:37.279 START TEST fio_dif_1_multi_subsystems 00:36:37.279 ************************************ 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:37.279 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:37.280 bdev_null0 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:37.280 [2024-11-19 11:04:25.630581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:37.280 bdev_null1 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:37.280 11:04:25 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:37.280 { 00:36:37.280 "params": { 00:36:37.280 "name": "Nvme$subsystem", 00:36:37.280 "trtype": "$TEST_TRANSPORT", 00:36:37.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:37.280 "adrfam": "ipv4", 00:36:37.280 "trsvcid": "$NVMF_PORT", 00:36:37.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:37.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:37.280 "hdgst": ${hdgst:-false}, 00:36:37.280 "ddgst": ${ddgst:-false} 00:36:37.280 }, 00:36:37.280 "method": "bdev_nvme_attach_controller" 00:36:37.280 } 00:36:37.280 EOF 00:36:37.280 )") 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@582 -- # cat 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:37.280 { 00:36:37.280 "params": { 00:36:37.280 "name": "Nvme$subsystem", 00:36:37.280 "trtype": "$TEST_TRANSPORT", 00:36:37.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:37.280 "adrfam": "ipv4", 00:36:37.280 "trsvcid": "$NVMF_PORT", 00:36:37.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:37.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:37.280 "hdgst": ${hdgst:-false}, 00:36:37.280 "ddgst": ${ddgst:-false} 00:36:37.280 }, 00:36:37.280 "method": "bdev_nvme_attach_controller" 00:36:37.280 } 00:36:37.280 EOF 00:36:37.280 )") 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:37.280 "params": { 00:36:37.280 "name": "Nvme0", 00:36:37.280 "trtype": "tcp", 00:36:37.280 "traddr": "10.0.0.2", 00:36:37.280 "adrfam": "ipv4", 00:36:37.280 "trsvcid": "4420", 00:36:37.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:37.280 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:37.280 "hdgst": false, 00:36:37.280 "ddgst": false 00:36:37.280 }, 00:36:37.280 "method": "bdev_nvme_attach_controller" 00:36:37.280 },{ 00:36:37.280 "params": { 00:36:37.280 "name": "Nvme1", 00:36:37.280 "trtype": "tcp", 00:36:37.280 "traddr": "10.0.0.2", 00:36:37.280 "adrfam": "ipv4", 00:36:37.280 "trsvcid": "4420", 00:36:37.280 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:37.280 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:37.280 "hdgst": false, 00:36:37.280 "ddgst": false 00:36:37.280 }, 00:36:37.280 "method": "bdev_nvme_attach_controller" 00:36:37.280 }' 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:36:37.280 11:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:37.281 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:36:37.281 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:36:37.281 fio-3.35
00:36:37.281 Starting 2 threads
00:36:47.275
00:36:47.275 filename0: (groupid=0, jobs=1): err= 0: pid=4188537: Tue Nov 19 11:04:36 2024
00:36:47.275   read: IOPS=197, BW=789KiB/s (808kB/s)(7904KiB/10020msec)
00:36:47.275     slat (nsec): min=5838, max=22618, avg=6941.41, stdev=1868.73
00:36:47.275     clat (usec): min=380, max=42584, avg=20261.89, stdev=20441.24
00:36:47.275      lat (usec): min=386, max=42591, avg=20268.83, stdev=20440.71
00:36:47.275     clat percentiles (usec):
00:36:47.275      |  1.00th=[  396],  5.00th=[  404], 10.00th=[  408], 20.00th=[  420],
00:36:47.275      | 30.00th=[  441], 40.00th=[  545], 50.00th=[  914], 60.00th=[40633],
00:36:47.275      | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681],
00:36:47.276      | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:36:47.276      | 99.99th=[42730]
00:36:47.276    bw (  KiB/s): min=  704, max=  896, per=50.30%, avg=788.80, stdev=44.38, samples=20
00:36:47.276    iops        : min=  176, max=  224, avg=197.20, stdev=11.10, samples=20
00:36:47.276   lat (usec)   : 500=39.12%, 750=10.27%, 1000=1.87%
00:36:47.276   lat (msec)   : 2=0.35%, 50=48.38%
00:36:47.276   cpu          : usr=96.67%, sys=3.07%, ctx=14, majf=0, minf=21
00:36:47.276   IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:47.276      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:47.276      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:47.276      issued rwts: total=1976,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:47.276      latency   : target=0, window=0, percentile=100.00%, depth=4
00:36:47.276 filename1: (groupid=0, jobs=1): err= 0: pid=4188538: Tue Nov 19 11:04:36 2024
00:36:47.276   read: IOPS=194, BW=779KiB/s (798kB/s)(7824KiB/10040msec)
00:36:47.276     slat (nsec): min=5840, max=21407, avg=7008.33, stdev=1968.40
00:36:47.276     clat (usec): min=395, max=42500, avg=20510.02, stdev=20461.33
00:36:47.276      lat (usec): min=401, max=42507, avg=20517.03, stdev=20460.73
00:36:47.276     clat percentiles (usec):
00:36:47.276      |  1.00th=[  449],  5.00th=[  478], 10.00th=[  494], 20.00th=[  603],
00:36:47.276      | 30.00th=[  611], 40.00th=[  619], 50.00th=[  947], 60.00th=[41157],
00:36:47.276      | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206],
00:36:47.276      | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:36:47.276      | 99.99th=[42730]
00:36:47.276    bw (  KiB/s): min=  704, max=  832, per=49.79%, avg=780.80, stdev=33.48, samples=20
00:36:47.276    iops        : min=  176, max=  208, avg=195.20, stdev= 8.37, samples=20
00:36:47.276   lat (usec)   : 500=11.81%, 750=37.68%, 1000=1.12%
00:36:47.276   lat (msec)   : 2=0.72%, 50=48.67%
00:36:47.276   cpu          : usr=96.65%, sys=3.09%, ctx=13, majf=0, minf=29
00:36:47.276   IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:47.276      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:47.276      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:47.276      issued rwts: total=1956,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:47.276      latency   : target=0, window=0, percentile=100.00%, depth=4
00:36:47.276
00:36:47.276 Run status group 0 (all jobs):
00:36:47.276    READ: bw=1567KiB/s (1604kB/s), 779KiB/s-789KiB/s (798kB/s-808kB/s), io=15.4MiB (16.1MB), run=10020-10040msec
00:36:47.535 11:04:37
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.535 00:36:47.535 real 0m11.518s 00:36:47.535 user 0m26.605s 00:36:47.535 sys 0m0.934s 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:47.535 11:04:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:47.535 ************************************ 00:36:47.535 END TEST fio_dif_1_multi_subsystems 00:36:47.535 ************************************ 00:36:47.535 11:04:37 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:47.535 11:04:37 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:47.535 11:04:37 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:47.535 11:04:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:47.535 ************************************ 00:36:47.535 START TEST fio_dif_rand_params 00:36:47.535 ************************************ 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:47.535 11:04:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.535 bdev_null0 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.535 
11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.535 [2024-11-19 11:04:37.226409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:47.535 { 00:36:47.535 "params": { 00:36:47.535 "name": 
"Nvme$subsystem", 00:36:47.535 "trtype": "$TEST_TRANSPORT", 00:36:47.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:47.535 "adrfam": "ipv4", 00:36:47.535 "trsvcid": "$NVMF_PORT", 00:36:47.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:47.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:47.535 "hdgst": ${hdgst:-false}, 00:36:47.535 "ddgst": ${ddgst:-false} 00:36:47.535 }, 00:36:47.535 "method": "bdev_nvme_attach_controller" 00:36:47.535 } 00:36:47.535 EOF 00:36:47.535 )") 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:47.535 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:47.536 11:04:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:47.536 "params": { 00:36:47.536 "name": "Nvme0", 00:36:47.536 "trtype": "tcp", 00:36:47.536 "traddr": "10.0.0.2", 00:36:47.536 "adrfam": "ipv4", 00:36:47.536 "trsvcid": "4420", 00:36:47.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:47.536 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:47.536 "hdgst": false, 00:36:47.536 "ddgst": false 00:36:47.536 }, 00:36:47.536 "method": "bdev_nvme_attach_controller" 00:36:47.536 }' 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:47.536 11:04:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:47.536 11:04:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:48.101 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:36:48.101 ...
00:36:48.101 fio-3.35
00:36:48.101 Starting 3 threads
00:36:54.664
00:36:54.664 filename0: (groupid=0, jobs=1): err= 0: pid=4190382: Tue Nov 19 11:04:43 2024
00:36:54.664   read: IOPS=312, BW=39.0MiB/s (40.9MB/s)(195MiB/5006msec)
00:36:54.664     slat (nsec): min=6009, max=36629, avg=10577.48, stdev=2148.09
00:36:54.664     clat (usec): min=3467, max=49647, avg=9593.68, stdev=7061.05
00:36:54.664      lat (usec): min=3473, max=49659, avg=9604.26, stdev=7060.94
00:36:54.664     clat percentiles (usec):
00:36:54.664      |  1.00th=[ 3851],  5.00th=[ 6194], 10.00th=[ 6915], 20.00th=[ 7504],
00:36:54.664      | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8717],
00:36:54.664      | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9896], 95.00th=[10552],
00:36:54.664      | 99.00th=[47973], 99.50th=[49021], 99.90th=[49546], 99.95th=[49546],
00:36:54.664      | 99.99th=[49546]
00:36:54.664    bw (  KiB/s): min=15360, max=47360, per=33.78%, avg=39936.00, stdev=10091.00, samples=10
00:36:54.664    iops        : min=  120, max=  370, avg=312.00, stdev=78.84, samples=10
00:36:54.664   lat (msec)   : 4=1.15%, 10=89.64%, 20=5.95%, 50=3.26%
00:36:54.664   cpu          : usr=94.35%, sys=5.35%, ctx=7, majf=0, minf=38
00:36:54.664   IO depths    : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:54.664      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:54.664      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:54.664      issued rwts: total=1563,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:54.664      latency   : target=0, window=0, percentile=100.00%, depth=3
00:36:54.664 filename0: (groupid=0, jobs=1): err= 0: pid=4190383: Tue Nov 19 11:04:43 2024
00:36:54.664   read: IOPS=301, BW=37.6MiB/s (39.5MB/s)(188MiB/5005msec)
00:36:54.664     slat (nsec): min=6007, max=41040, avg=10620.56, stdev=2046.57
00:36:54.664     clat (usec): min=3435, max=51097, avg=9949.61, stdev=6317.79
00:36:54.664      lat (usec): min=3441, max=51109, avg=9960.23, stdev=6317.57
00:36:54.664     clat percentiles (usec):
00:36:54.664      |  1.00th=[ 4621],  5.00th=[ 6063], 10.00th=[ 7046], 20.00th=[ 7963],
00:36:54.665      | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372],
00:36:54.665      | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10814], 95.00th=[11469],
00:36:54.665      | 99.00th=[45876], 99.50th=[48497], 99.90th=[51119], 99.95th=[51119],
00:36:54.665      | 99.99th=[51119]
00:36:54.665    bw (  KiB/s): min=14592, max=44032, per=32.57%, avg=38502.40, stdev=8740.90, samples=10
00:36:54.665    iops        : min=  114, max=  344, avg=300.80, stdev=68.29, samples=10
00:36:54.665   lat (msec)   : 4=0.60%, 10=74.45%, 20=22.16%, 50=2.65%, 100=0.13%
00:36:54.665   cpu          : usr=95.02%, sys=4.68%, ctx=8, majf=0, minf=73
00:36:54.665   IO depths    : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:54.665      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:54.665      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:54.665      issued rwts: total=1507,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:54.665      latency   : target=0, window=0, percentile=100.00%, depth=3
00:36:54.665 filename0: (groupid=0, jobs=1): err= 0: pid=4190384: Tue Nov 19 11:04:43 2024
00:36:54.665   read: IOPS=310, BW=38.8MiB/s (40.7MB/s)(194MiB/5004msec)
00:36:54.665     slat (nsec): min=5997, max=27330, avg=10371.93, stdev=1968.61
00:36:54.665     clat (usec): min=3101, max=50122, avg=9652.91, stdev=5387.40
00:36:54.665      lat (usec): min=3107, max=50150, avg=9663.28, stdev=5387.37
00:36:54.665     clat percentiles (usec):
00:36:54.665      |  1.00th=[ 3523],  5.00th=[ 4555], 10.00th=[ 6259], 20.00th=[ 7832],
00:36:54.665      | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9765],
00:36:54.665      | 70.00th=[10159], 80.00th=[10552], 90.00th=[11076], 95.00th=[11469],
00:36:54.665      | 99.00th=[44303], 99.50th=[45351], 99.90th=[50070], 99.95th=[50070],
00:36:54.665      | 99.99th=[50070]
00:36:54.665    bw (  KiB/s): min=27648, max=44288, per=33.57%, avg=39680.00, stdev=4545.95, samples=10
00:36:54.665    iops        : min=  216, max=  346, avg=310.00, stdev=35.52, samples=10
00:36:54.665   lat (msec)   : 4=3.54%, 10=62.78%, 20=31.75%, 50=1.74%, 100=0.19%
00:36:54.665   cpu          : usr=94.48%, sys=5.22%, ctx=12, majf=0, minf=21
00:36:54.665   IO depths    : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:54.665      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:54.665      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:54.665      issued rwts: total=1553,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:54.665      latency   : target=0, window=0, percentile=100.00%, depth=3
00:36:54.665
00:36:54.665 Run status group 0 (all jobs):
00:36:54.665    READ: bw=115MiB/s (121MB/s), 37.6MiB/s-39.0MiB/s (39.5MB/s-40.9MB/s), io=578MiB (606MB), run=5004-5006msec
00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0
00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:54.665 11:04:43
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:54.665 bdev_null0 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:54.665 [2024-11-19 11:04:43.437452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:36:54.665 bdev_null1 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:54.665 bdev_null2 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:54.665 11:04:43 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:54.665 11:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:54.665 { 00:36:54.665 "params": { 00:36:54.665 "name": "Nvme$subsystem", 00:36:54.665 "trtype": "$TEST_TRANSPORT", 00:36:54.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:54.665 "adrfam": "ipv4", 00:36:54.665 "trsvcid": "$NVMF_PORT", 00:36:54.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:54.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:54.666 "hdgst": ${hdgst:-false}, 00:36:54.666 "ddgst": ${ddgst:-false} 00:36:54.666 }, 00:36:54.666 "method": "bdev_nvme_attach_controller" 00:36:54.666 } 00:36:54.666 EOF 00:36:54.666 )") 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 
00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:54.666 { 00:36:54.666 "params": { 00:36:54.666 "name": "Nvme$subsystem", 00:36:54.666 "trtype": "$TEST_TRANSPORT", 00:36:54.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:54.666 "adrfam": "ipv4", 00:36:54.666 "trsvcid": "$NVMF_PORT", 00:36:54.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:54.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:54.666 "hdgst": ${hdgst:-false}, 00:36:54.666 "ddgst": ${ddgst:-false} 00:36:54.666 }, 00:36:54.666 "method": "bdev_nvme_attach_controller" 00:36:54.666 } 00:36:54.666 EOF 00:36:54.666 )") 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:54.666 
11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:54.666 { 00:36:54.666 "params": { 00:36:54.666 "name": "Nvme$subsystem", 00:36:54.666 "trtype": "$TEST_TRANSPORT", 00:36:54.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:54.666 "adrfam": "ipv4", 00:36:54.666 "trsvcid": "$NVMF_PORT", 00:36:54.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:54.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:54.666 "hdgst": ${hdgst:-false}, 00:36:54.666 "ddgst": ${ddgst:-false} 00:36:54.666 }, 00:36:54.666 "method": "bdev_nvme_attach_controller" 00:36:54.666 } 00:36:54.666 EOF 00:36:54.666 )") 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:54.666 "params": { 00:36:54.666 "name": "Nvme0", 00:36:54.666 "trtype": "tcp", 00:36:54.666 "traddr": "10.0.0.2", 00:36:54.666 "adrfam": "ipv4", 00:36:54.666 "trsvcid": "4420", 00:36:54.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:54.666 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:54.666 "hdgst": false, 00:36:54.666 "ddgst": false 00:36:54.666 }, 00:36:54.666 "method": "bdev_nvme_attach_controller" 00:36:54.666 },{ 00:36:54.666 "params": { 00:36:54.666 "name": "Nvme1", 00:36:54.666 "trtype": "tcp", 00:36:54.666 "traddr": "10.0.0.2", 00:36:54.666 "adrfam": "ipv4", 00:36:54.666 "trsvcid": "4420", 00:36:54.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:54.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:54.666 "hdgst": false, 00:36:54.666 "ddgst": false 00:36:54.666 }, 00:36:54.666 "method": "bdev_nvme_attach_controller" 00:36:54.666 },{ 00:36:54.666 "params": { 00:36:54.666 "name": "Nvme2", 00:36:54.666 "trtype": "tcp", 00:36:54.666 "traddr": "10.0.0.2", 00:36:54.666 "adrfam": "ipv4", 00:36:54.666 "trsvcid": "4420", 00:36:54.666 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:54.666 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:54.666 "hdgst": false, 00:36:54.666 "ddgst": false 00:36:54.666 }, 00:36:54.666 "method": "bdev_nvme_attach_controller" 00:36:54.666 }' 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:54.666 11:04:43 
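The trace above shows `nvmf/common.sh` building the bdev JSON one fragment at a time: each loop iteration appends a command-substituted heredoc to a bash array, and the final `IFS=,` plus `printf '%s\n' "${config[*]}"` joins the fragments into the comma-separated document that `jq .` then normalizes. A minimal sketch of that pattern, with sample transport/IP/port values standing in for the real test environment:

```shell
#!/usr/bin/env bash
# Sketch of the config-generation pattern visible in the trace above:
# collect one JSON fragment per subsystem via a command-substituted
# heredoc, then join the array with IFS=, and a single printf.
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT hold sample values.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 0 1 2; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas in a subshell so IFS is not changed
# globally; in the real script the result is piped through `jq .`.
joined=$( (IFS=,; printf '%s\n' "${config[*]}") )
```

The `${hdgst:-false}` / `${ddgst:-false}` expansions are why the emitted config shows `"hdgst": false` when digests are not requested.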
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:54.666 11:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:54.666 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:54.666 ... 00:36:54.666 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:54.666 ... 00:36:54.666 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:54.666 ... 
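Two mechanisms run back to back in this stretch of the trace: the wrapper `ldd`s the fio plugin and greps for `libasan` / `libclang_rt.asan` to decide what to put in `LD_PRELOAD` (empty here, so only the plugin itself is preloaded), and the job file plus JSON config reach fio as anonymous `/dev/fd` files rather than temp files. A small demonstration of the `/dev/fd` handoff, using a stand-in function instead of fio (the `show_args` name and sample payloads are illustrative only):

```shell
#!/usr/bin/env bash
# The autotest wrapper resolves any ASAN runtime the plugin links against,
# roughly:  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# and prepends it to LD_PRELOAD before launching fio. Neither the job file
# nor the JSON config touches disk: process substitution hands the command
# two anonymous /dev/fd paths (the /dev/fd/62 and /dev/fd/61 seen in the
# invocation above).
show_args() { printf 'conf=%s job=%s\n' "$1" "$2"; }  # stand-in for fio

result=$(show_args <(printf '{"subsystems":[]}') <(printf '[global]\n'))
```

Each `<(...)` expands to a `/dev/fd/NN` path whose contents are produced by the inner command, which is exactly what lets `fio_bdev` take `--spdk_json_conf /dev/fd/62 /dev/fd/61` without any files on disk.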
00:36:54.666 fio-3.35 00:36:54.666 Starting 24 threads 00:37:06.864 00:37:06.864 filename0: (groupid=0, jobs=1): err= 0: pid=4191550: Tue Nov 19 11:04:54 2024 00:37:06.864 read: IOPS=533, BW=2133KiB/s (2184kB/s)(20.9MiB/10023msec) 00:37:06.864 slat (nsec): min=7311, max=94321, avg=30109.53, stdev=21515.43 00:37:06.864 clat (usec): min=11458, max=31284, avg=29747.88, stdev=1326.99 00:37:06.864 lat (usec): min=11473, max=31297, avg=29777.99, stdev=1325.82 00:37:06.864 clat percentiles (usec): 00:37:06.864 | 1.00th=[28967], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:37:06.865 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:06.865 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:37:06.865 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:37:06.865 | 99.99th=[31327] 00:37:06.865 bw ( KiB/s): min= 2048, max= 2180, per=4.17%, avg=2131.40, stdev=62.79, samples=20 00:37:06.865 iops : min= 512, max= 545, avg=532.85, stdev=15.70, samples=20 00:37:06.865 lat (msec) : 20=0.60%, 50=99.40% 00:37:06.865 cpu : usr=98.70%, sys=0.94%, ctx=9, majf=0, minf=10 00:37:06.865 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:06.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.865 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.865 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.865 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.865 filename0: (groupid=0, jobs=1): err= 0: pid=4191551: Tue Nov 19 11:04:54 2024 00:37:06.865 read: IOPS=531, BW=2127KiB/s (2178kB/s)(20.8MiB/10022msec) 00:37:06.865 slat (nsec): min=5985, max=35052, avg=16086.44, stdev=4802.99 00:37:06.865 clat (usec): min=16654, max=46589, avg=29953.85, stdev=996.04 00:37:06.865 lat (usec): min=16685, max=46605, avg=29969.94, stdev=995.83 00:37:06.865 clat percentiles (usec): 00:37:06.865 | 1.00th=[29492], 
5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:37:06.865 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:37:06.865 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:06.865 | 99.00th=[31065], 99.50th=[31327], 99.90th=[44303], 99.95th=[44303], 00:37:06.865 | 99.99th=[46400] 00:37:06.865 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2122.11, stdev=64.93, samples=19 00:37:06.865 iops : min= 512, max= 544, avg=530.53, stdev=16.23, samples=19 00:37:06.865 lat (msec) : 20=0.19%, 50=99.81% 00:37:06.865 cpu : usr=98.63%, sys=1.00%, ctx=17, majf=0, minf=9 00:37:06.865 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:06.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.865 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.865 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.865 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.865 filename0: (groupid=0, jobs=1): err= 0: pid=4191552: Tue Nov 19 11:04:54 2024 00:37:06.865 read: IOPS=534, BW=2136KiB/s (2188kB/s)(20.9MiB/10006msec) 00:37:06.865 slat (usec): min=7, max=103, avg=37.31, stdev=22.09 00:37:06.865 clat (usec): min=8909, max=52686, avg=29574.88, stdev=2429.32 00:37:06.865 lat (usec): min=8918, max=52728, avg=29612.19, stdev=2431.96 00:37:06.865 clat percentiles (usec): 00:37:06.865 | 1.00th=[20317], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:06.865 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:37:06.865 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:37:06.865 | 99.00th=[37487], 99.50th=[44303], 99.90th=[52691], 99.95th=[52691], 00:37:06.865 | 99.99th=[52691] 00:37:06.865 bw ( KiB/s): min= 1920, max= 2400, per=4.18%, avg=2135.20, stdev=104.01, samples=20 00:37:06.865 iops : min= 480, max= 600, avg=533.80, stdev=26.00, samples=20 00:37:06.865 lat (msec) : 
10=0.13%, 20=0.84%, 50=98.73%, 100=0.30% 00:37:06.865 cpu : usr=98.58%, sys=1.05%, ctx=11, majf=0, minf=9 00:37:06.865 IO depths : 1=5.5%, 2=11.2%, 4=22.9%, 8=53.1%, 16=7.2%, 32=0.0%, >=64=0.0% 00:37:06.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.865 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.865 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.865 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.865 filename0: (groupid=0, jobs=1): err= 0: pid=4191553: Tue Nov 19 11:04:54 2024 00:37:06.865 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10007msec) 00:37:06.865 slat (usec): min=7, max=109, avg=40.93, stdev=21.85 00:37:06.865 clat (usec): min=12737, max=52893, avg=29712.13, stdev=1595.57 00:37:06.865 lat (usec): min=12752, max=52915, avg=29753.06, stdev=1596.79 00:37:06.865 clat percentiles (usec): 00:37:06.865 | 1.00th=[28967], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:06.865 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:37:06.865 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:37:06.865 | 99.00th=[30802], 99.50th=[31065], 99.90th=[52691], 99.95th=[52691], 00:37:06.865 | 99.99th=[52691] 00:37:06.865 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2118.40, stdev=77.42, samples=20 00:37:06.865 iops : min= 480, max= 544, avg=529.60, stdev=19.35, samples=20 00:37:06.865 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:37:06.865 cpu : usr=98.41%, sys=1.20%, ctx=12, majf=0, minf=11 00:37:06.865 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:06.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.865 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.865 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.865 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.865 
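Each per-thread block above follows the fio-3.35 summary layout: a `read: IOPS=…, BW=…` line, `slat`/`clat`/`lat` statistics, clat percentiles, a `bw (KiB/s)` line, and the `issued rwts:` totals. When post-processing such logs, the headline numbers can be pulled out with `sed`; the sample line below is copied from the first record, and the field layout is assumed stable for fio-3.35 output:

```shell
#!/bin/sh
# Extract IOPS and bandwidth from a fio-3.35 per-thread summary line.
# The sample mirrors the first filename0 record in the log above.
line='read: IOPS=533, BW=2133KiB/s (2184kB/s)(20.9MiB/10023msec)'

# IOPS may carry a "k" suffix for large values, hence [0-9.k]*
iops=$(printf '%s\n' "$line" | sed -n 's/.*IOPS=\([0-9.k]*\),.*/\1/p')
bw=$(printf '%s\n' "$line" | sed -n 's/.*BW=\([0-9.]*[KMG]iB\/s\).*/\1/p')
```

Note that fio prints both binary (`KiB/s`) and decimal (`kB/s`, in parentheses) bandwidth; the pattern above keeps the binary figure.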
filename0: (groupid=0, jobs=1): err= 0: pid=4191554: Tue Nov 19 11:04:54 2024 00:37:06.865 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10007msec) 00:37:06.865 slat (nsec): min=5012, max=90383, avg=36325.58, stdev=20523.83 00:37:06.865 clat (usec): min=15781, max=59676, avg=29769.10, stdev=1393.57 00:37:06.865 lat (usec): min=15811, max=59691, avg=29805.42, stdev=1394.08 00:37:06.865 clat percentiles (usec): 00:37:06.865 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:37:06.865 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:37:06.865 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:37:06.865 | 99.00th=[30802], 99.50th=[31065], 99.90th=[48497], 99.95th=[48497], 00:37:06.865 | 99.99th=[59507] 00:37:06.865 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2117.95, stdev=77.09, samples=20 00:37:06.865 iops : min= 480, max= 544, avg=529.45, stdev=19.25, samples=20 00:37:06.865 lat (msec) : 20=0.30%, 50=99.66%, 100=0.04% 00:37:06.865 cpu : usr=98.48%, sys=1.14%, ctx=14, majf=0, minf=9 00:37:06.865 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:06.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.865 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.865 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.865 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.865 filename0: (groupid=0, jobs=1): err= 0: pid=4191555: Tue Nov 19 11:04:54 2024 00:37:06.865 read: IOPS=530, BW=2124KiB/s (2174kB/s)(20.8MiB/10006msec) 00:37:06.865 slat (usec): min=4, max=100, avg=40.42, stdev=20.55 00:37:06.865 clat (usec): min=28420, max=35075, avg=29746.29, stdev=421.88 00:37:06.865 lat (usec): min=28438, max=35091, avg=29786.70, stdev=423.85 00:37:06.865 clat percentiles (usec): 00:37:06.865 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:06.865 | 
30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:37:06.865 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:37:06.865 | 99.00th=[30802], 99.50th=[31327], 99.90th=[34866], 99.95th=[34866], 00:37:06.865 | 99.99th=[34866] 00:37:06.865 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2122.11, stdev=64.93, samples=19 00:37:06.865 iops : min= 512, max= 544, avg=530.53, stdev=16.23, samples=19 00:37:06.865 lat (msec) : 50=100.00% 00:37:06.865 cpu : usr=98.41%, sys=1.22%, ctx=9, majf=0, minf=9 00:37:06.865 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:06.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.865 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.865 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.865 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.865 filename0: (groupid=0, jobs=1): err= 0: pid=4191556: Tue Nov 19 11:04:54 2024 00:37:06.865 read: IOPS=531, BW=2126KiB/s (2177kB/s)(20.8MiB/10004msec) 00:37:06.865 slat (usec): min=4, max=104, avg=42.22, stdev=21.95 00:37:06.865 clat (usec): min=18007, max=59818, avg=29672.68, stdev=1484.16 00:37:06.865 lat (usec): min=18016, max=59832, avg=29714.90, stdev=1486.04 00:37:06.865 clat percentiles (usec): 00:37:06.865 | 1.00th=[24773], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:06.865 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:37:06.865 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:37:06.865 | 99.00th=[30802], 99.50th=[35914], 99.90th=[46924], 99.95th=[46924], 00:37:06.865 | 99.99th=[60031] 00:37:06.865 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2124.63, stdev=62.78, samples=19 00:37:06.865 iops : min= 512, max= 544, avg=531.16, stdev=15.70, samples=19 00:37:06.865 lat (msec) : 20=0.38%, 50=99.59%, 100=0.04% 00:37:06.865 cpu : usr=98.60%, sys=1.02%, ctx=10, 
majf=0, minf=9 00:37:06.865 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:06.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.865 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.865 issued rwts: total=5318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.865 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.865 filename0: (groupid=0, jobs=1): err= 0: pid=4191557: Tue Nov 19 11:04:54 2024 00:37:06.865 read: IOPS=535, BW=2141KiB/s (2193kB/s)(21.0MiB/10024msec) 00:37:06.865 slat (nsec): min=6827, max=92245, avg=37593.16, stdev=20483.47 00:37:06.865 clat (usec): min=7155, max=31329, avg=29517.92, stdev=2027.05 00:37:06.865 lat (usec): min=7162, max=31343, avg=29555.51, stdev=2030.61 00:37:06.865 clat percentiles (usec): 00:37:06.865 | 1.00th=[19268], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:06.865 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:37:06.865 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:37:06.865 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31327], 99.95th=[31327], 00:37:06.865 | 99.99th=[31327] 00:37:06.865 bw ( KiB/s): min= 2048, max= 2352, per=4.19%, avg=2140.00, stdev=79.39, samples=20 00:37:06.865 iops : min= 512, max= 588, avg=535.00, stdev=19.85, samples=20 00:37:06.865 lat (msec) : 10=0.58%, 20=0.54%, 50=98.88% 00:37:06.865 cpu : usr=98.52%, sys=1.10%, ctx=8, majf=0, minf=9 00:37:06.865 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:06.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.866 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.866 issued rwts: total=5366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.866 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.866 filename1: (groupid=0, jobs=1): err= 0: pid=4191558: Tue Nov 19 11:04:54 2024 
00:37:06.866 read: IOPS=533, BW=2133KiB/s (2184kB/s)(20.9MiB/10023msec) 00:37:06.866 slat (nsec): min=7681, max=93493, avg=38521.09, stdev=20105.09 00:37:06.866 clat (usec): min=11892, max=31284, avg=29639.25, stdev=1295.13 00:37:06.866 lat (usec): min=11909, max=31306, avg=29677.77, stdev=1297.36 00:37:06.866 clat percentiles (usec): 00:37:06.866 | 1.00th=[28443], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:06.866 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:37:06.866 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:37:06.866 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31065], 99.95th=[31327], 00:37:06.866 | 99.99th=[31327] 00:37:06.866 bw ( KiB/s): min= 2048, max= 2180, per=4.17%, avg=2131.40, stdev=62.79, samples=20 00:37:06.866 iops : min= 512, max= 545, avg=532.85, stdev=15.70, samples=20 00:37:06.866 lat (msec) : 20=0.60%, 50=99.40% 00:37:06.866 cpu : usr=98.52%, sys=1.11%, ctx=5, majf=0, minf=9 00:37:06.866 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:06.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.866 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.866 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.866 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.866 filename1: (groupid=0, jobs=1): err= 0: pid=4191559: Tue Nov 19 11:04:54 2024 00:37:06.866 read: IOPS=530, BW=2124KiB/s (2174kB/s)(20.8MiB/10006msec) 00:37:06.866 slat (usec): min=9, max=102, avg=42.89, stdev=21.18 00:37:06.866 clat (usec): min=19618, max=45149, avg=29721.05, stdev=544.62 00:37:06.866 lat (usec): min=19628, max=45166, avg=29763.94, stdev=546.48 00:37:06.866 clat percentiles (usec): 00:37:06.866 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:06.866 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:37:06.866 | 70.00th=[29754], 
80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:37:06.866 | 99.00th=[30802], 99.50th=[31065], 99.90th=[34866], 99.95th=[34866], 00:37:06.866 | 99.99th=[45351] 00:37:06.866 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2122.11, stdev=64.93, samples=19 00:37:06.866 iops : min= 512, max= 544, avg=530.53, stdev=16.23, samples=19 00:37:06.866 lat (msec) : 20=0.04%, 50=99.96% 00:37:06.866 cpu : usr=98.76%, sys=0.87%, ctx=8, majf=0, minf=9 00:37:06.866 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:06.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.866 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.866 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.866 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.866 filename1: (groupid=0, jobs=1): err= 0: pid=4191560: Tue Nov 19 11:04:54 2024 00:37:06.866 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10008msec) 00:37:06.866 slat (nsec): min=4829, max=33836, avg=17210.76, stdev=4044.93 00:37:06.866 clat (usec): min=23830, max=41964, avg=29988.67, stdev=770.62 00:37:06.866 lat (usec): min=23844, max=41978, avg=30005.88, stdev=770.13 00:37:06.866 clat percentiles (usec): 00:37:06.866 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:37:06.866 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:37:06.866 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:06.866 | 99.00th=[31065], 99.50th=[31065], 99.90th=[41681], 99.95th=[42206], 00:37:06.866 | 99.99th=[42206] 00:37:06.866 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2122.32, stdev=64.68, samples=19 00:37:06.866 iops : min= 512, max= 544, avg=530.58, stdev=16.17, samples=19 00:37:06.866 lat (msec) : 50=100.00% 00:37:06.866 cpu : usr=98.56%, sys=1.06%, ctx=13, majf=0, minf=9 00:37:06.866 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 
00:37:06.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.866 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.866 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.866 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.866 filename1: (groupid=0, jobs=1): err= 0: pid=4191561: Tue Nov 19 11:04:54 2024 00:37:06.866 read: IOPS=533, BW=2133KiB/s (2184kB/s)(20.9MiB/10023msec) 00:37:06.866 slat (nsec): min=7770, max=87856, avg=36904.01, stdev=19961.00 00:37:06.866 clat (usec): min=11919, max=31318, avg=29664.87, stdev=1295.97 00:37:06.866 lat (usec): min=11935, max=31332, avg=29701.78, stdev=1297.78 00:37:06.866 clat percentiles (usec): 00:37:06.866 | 1.00th=[28443], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:06.866 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:37:06.866 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:37:06.866 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31327], 99.95th=[31327], 00:37:06.866 | 99.99th=[31327] 00:37:06.866 bw ( KiB/s): min= 2048, max= 2180, per=4.17%, avg=2131.40, stdev=62.79, samples=20 00:37:06.866 iops : min= 512, max= 545, avg=532.85, stdev=15.70, samples=20 00:37:06.866 lat (msec) : 20=0.60%, 50=99.40% 00:37:06.866 cpu : usr=98.63%, sys=1.00%, ctx=10, majf=0, minf=9 00:37:06.866 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:06.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.866 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.866 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.866 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.866 filename1: (groupid=0, jobs=1): err= 0: pid=4191562: Tue Nov 19 11:04:54 2024 00:37:06.866 read: IOPS=534, BW=2138KiB/s (2189kB/s)(20.9MiB/10028msec) 00:37:06.866 slat (usec): min=7, max=104, 
avg=41.56, stdev=22.79 00:37:06.866 clat (usec): min=11101, max=31364, avg=29591.71, stdev=1752.79 00:37:06.866 lat (usec): min=11109, max=31380, avg=29633.27, stdev=1755.25 00:37:06.866 clat percentiles (usec): 00:37:06.866 | 1.00th=[19268], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:37:06.866 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:37:06.866 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:37:06.866 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:37:06.866 | 99.99th=[31327] 00:37:06.866 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2137.60, stdev=73.12, samples=20 00:37:06.866 iops : min= 512, max= 576, avg=534.40, stdev=18.28, samples=20 00:37:06.866 lat (msec) : 20=1.10%, 50=98.90% 00:37:06.866 cpu : usr=98.64%, sys=1.00%, ctx=9, majf=0, minf=9 00:37:06.866 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:06.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.866 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.866 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.866 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.866 filename1: (groupid=0, jobs=1): err= 0: pid=4191563: Tue Nov 19 11:04:54 2024 00:37:06.866 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10007msec) 00:37:06.866 slat (usec): min=6, max=104, avg=41.20, stdev=22.03 00:37:06.866 clat (usec): min=12684, max=63049, avg=29714.76, stdev=1674.25 00:37:06.866 lat (usec): min=12699, max=63063, avg=29755.96, stdev=1675.06 00:37:06.866 clat percentiles (usec): 00:37:06.866 | 1.00th=[28967], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:06.866 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:37:06.866 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:37:06.866 | 99.00th=[30802], 99.50th=[31327], 99.90th=[52691], 
99.95th=[52691], 00:37:06.866 | 99.99th=[63177] 00:37:06.866 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2118.40, stdev=77.42, samples=20 00:37:06.866 iops : min= 480, max= 544, avg=529.60, stdev=19.35, samples=20 00:37:06.866 lat (msec) : 20=0.32%, 50=99.38%, 100=0.30% 00:37:06.866 cpu : usr=98.37%, sys=1.23%, ctx=11, majf=0, minf=9 00:37:06.866 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:06.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.866 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.866 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.866 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.866 filename1: (groupid=0, jobs=1): err= 0: pid=4191564: Tue Nov 19 11:04:54 2024 00:37:06.866 read: IOPS=537, BW=2150KiB/s (2202kB/s)(21.0MiB/10007msec) 00:37:06.866 slat (usec): min=5, max=111, avg=30.51, stdev=23.73 00:37:06.866 clat (usec): min=12744, max=58969, avg=29504.37, stdev=3026.56 00:37:06.866 lat (usec): min=12759, max=58984, avg=29534.88, stdev=3025.61 00:37:06.866 clat percentiles (usec): 00:37:06.866 | 1.00th=[18220], 5.00th=[23462], 10.00th=[27132], 20.00th=[29492], 00:37:06.866 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:06.866 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[34866], 00:37:06.866 | 99.00th=[38011], 99.50th=[41157], 99.90th=[47449], 99.95th=[47449], 00:37:06.866 | 99.99th=[58983] 00:37:06.866 bw ( KiB/s): min= 2027, max= 2240, per=4.20%, avg=2145.35, stdev=64.03, samples=20 00:37:06.866 iops : min= 506, max= 560, avg=536.30, stdev=16.08, samples=20 00:37:06.866 lat (msec) : 20=1.41%, 50=98.55%, 100=0.04% 00:37:06.866 cpu : usr=98.68%, sys=0.95%, ctx=9, majf=0, minf=11 00:37:06.866 IO depths : 1=3.2%, 2=6.5%, 4=14.1%, 8=65.1%, 16=11.2%, 32=0.0%, >=64=0.0% 00:37:06.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:37:06.866 complete : 0=0.0%, 4=91.5%, 8=4.6%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.866 issued rwts: total=5380,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.866 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.866 filename1: (groupid=0, jobs=1): err= 0: pid=4191565: Tue Nov 19 11:04:54 2024
00:37:06.866 read: IOPS=534, BW=2139KiB/s (2190kB/s)(20.9MiB/10024msec)
00:37:06.867 slat (nsec): min=6921, max=76647, avg=28938.08, stdev=13107.67
00:37:06.867 clat (usec): min=7663, max=33807, avg=29684.82, stdev=1898.99
00:37:06.867 lat (usec): min=7675, max=33833, avg=29713.76, stdev=1899.65
00:37:06.867 clat percentiles (usec):
00:37:06.867 | 1.00th=[17171], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754],
00:37:06.867 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016],
00:37:06.867 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278],
00:37:06.867 | 99.00th=[30802], 99.50th=[30802], 99.90th=[31327], 99.95th=[31327],
00:37:06.867 | 99.99th=[33817]
00:37:06.867 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2137.60, stdev=73.12, samples=20
00:37:06.867 iops : min= 512, max= 576, avg=534.40, stdev=18.28, samples=20
00:37:06.867 lat (msec) : 10=0.30%, 20=0.90%, 50=98.81%
00:37:06.867 cpu : usr=97.69%, sys=1.45%, ctx=206, majf=0, minf=9
00:37:06.867 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:06.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.867 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.867 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.867 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.867 filename2: (groupid=0, jobs=1): err= 0: pid=4191566: Tue Nov 19 11:04:54 2024
00:37:06.867 read: IOPS=530, BW=2124KiB/s (2174kB/s)(20.8MiB/10006msec)
00:37:06.867 slat (nsec): min=5427, max=93946, avg=35707.13, stdev=20799.64
00:37:06.867 clat (usec): min=15775, max=47470, avg=29764.91, stdev=1269.96
00:37:06.867 lat (usec): min=15783, max=47484, avg=29800.62, stdev=1270.74
00:37:06.867 clat percentiles (usec):
00:37:06.867 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492],
00:37:06.867 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754],
00:37:06.867 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278],
00:37:06.867 | 99.00th=[30802], 99.50th=[31065], 99.90th=[47449], 99.95th=[47449],
00:37:06.867 | 99.99th=[47449]
00:37:06.867 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2118.10, stdev=76.68, samples=20
00:37:06.867 iops : min= 480, max= 544, avg=529.45, stdev=19.25, samples=20
00:37:06.867 lat (msec) : 20=0.30%, 50=99.70%
00:37:06.867 cpu : usr=98.62%, sys=1.01%, ctx=13, majf=0, minf=9
00:37:06.867 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:06.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.867 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.867 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.867 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.867 filename2: (groupid=0, jobs=1): err= 0: pid=4191567: Tue Nov 19 11:04:54 2024
00:37:06.867 read: IOPS=530, BW=2124KiB/s (2174kB/s)(20.8MiB/10006msec)
00:37:06.867 slat (usec): min=7, max=100, avg=41.03, stdev=21.26
00:37:06.867 clat (usec): min=19646, max=53069, avg=29727.35, stdev=666.52
00:37:06.867 lat (usec): min=19662, max=53085, avg=29768.38, stdev=668.64
00:37:06.867 clat percentiles (usec):
00:37:06.867 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492],
00:37:06.867 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754],
00:37:06.867 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278],
00:37:06.867 | 99.00th=[30802], 99.50th=[31065], 99.90th=[34866], 99.95th=[34866],
00:37:06.867 | 99.99th=[53216]
00:37:06.867 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2122.11, stdev=64.93, samples=19
00:37:06.867 iops : min= 512, max= 544, avg=530.53, stdev=16.23, samples=19
00:37:06.867 lat (msec) : 20=0.08%, 50=99.89%, 100=0.04%
00:37:06.867 cpu : usr=98.66%, sys=0.95%, ctx=11, majf=0, minf=9
00:37:06.867 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:06.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.867 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.867 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.867 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.867 filename2: (groupid=0, jobs=1): err= 0: pid=4191568: Tue Nov 19 11:04:54 2024
00:37:06.867 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10008msec)
00:37:06.867 slat (nsec): min=5477, max=89779, avg=35391.31, stdev=20231.70
00:37:06.867 clat (usec): min=15734, max=48583, avg=29781.39, stdev=1309.52
00:37:06.867 lat (usec): min=15753, max=48597, avg=29816.78, stdev=1309.98
00:37:06.867 clat percentiles (usec):
00:37:06.867 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492],
00:37:06.867 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754],
00:37:06.867 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278],
00:37:06.867 | 99.00th=[30802], 99.50th=[31065], 99.90th=[48497], 99.95th=[48497],
00:37:06.867 | 99.99th=[48497]
00:37:06.867 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2117.95, stdev=77.09, samples=20
00:37:06.867 iops : min= 480, max= 544, avg=529.45, stdev=19.25, samples=20
00:37:06.867 lat (msec) : 20=0.30%, 50=99.70%
00:37:06.867 cpu : usr=98.64%, sys=0.98%, ctx=13, majf=0, minf=9
00:37:06.867 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:06.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.867 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.867 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.867 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.867 filename2: (groupid=0, jobs=1): err= 0: pid=4191569: Tue Nov 19 11:04:54 2024
00:37:06.867 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10008msec)
00:37:06.867 slat (nsec): min=5112, max=34567, avg=16781.18, stdev=4071.92
00:37:06.867 clat (usec): min=23882, max=42129, avg=29994.60, stdev=777.38
00:37:06.867 lat (usec): min=23895, max=42143, avg=30011.38, stdev=776.88
00:37:06.867 clat percentiles (usec):
00:37:06.867 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754],
00:37:06.867 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016],
00:37:06.867 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540],
00:37:06.867 | 99.00th=[31065], 99.50th=[31327], 99.90th=[42206], 99.95th=[42206],
00:37:06.867 | 99.99th=[42206]
00:37:06.867 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2122.32, stdev=64.68, samples=19
00:37:06.867 iops : min= 512, max= 544, avg=530.58, stdev=16.17, samples=19
00:37:06.867 lat (msec) : 50=100.00%
00:37:06.867 cpu : usr=98.57%, sys=1.05%, ctx=12, majf=0, minf=9
00:37:06.867 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:06.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.867 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.867 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.867 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.867 filename2: (groupid=0, jobs=1): err= 0: pid=4191570: Tue Nov 19 11:04:54 2024
00:37:06.867 read: IOPS=531, BW=2127KiB/s (2178kB/s)(20.8MiB/10018msec)
00:37:06.867 slat (nsec): min=7067, max=73931, avg=31434.04, stdev=12152.90
00:37:06.867 clat (usec): min=20043, max=31376, avg=29830.56, stdev=615.21
00:37:06.867 lat (usec): min=20051, max=31391, avg=29861.99, stdev=614.69
00:37:06.867 clat percentiles (usec):
00:37:06.867 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754],
00:37:06.867 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016],
00:37:06.867 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278],
00:37:06.867 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327],
00:37:06.867 | 99.99th=[31327]
00:37:06.867 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2124.80, stdev=64.34, samples=20
00:37:06.867 iops : min= 512, max= 544, avg=531.20, stdev=16.08, samples=20
00:37:06.867 lat (msec) : 50=100.00%
00:37:06.867 cpu : usr=98.42%, sys=1.03%, ctx=78, majf=0, minf=9
00:37:06.867 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:06.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.867 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.867 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.867 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.867 filename2: (groupid=0, jobs=1): err= 0: pid=4191571: Tue Nov 19 11:04:54 2024
00:37:06.867 read: IOPS=545, BW=2182KiB/s (2234kB/s)(21.3MiB/10015msec)
00:37:06.867 slat (usec): min=7, max=100, avg=14.44, stdev=13.17
00:37:06.867 clat (usec): min=1214, max=31393, avg=29210.06, stdev=4415.84
00:37:06.867 lat (usec): min=1227, max=31408, avg=29224.49, stdev=4415.89
00:37:06.867 clat percentiles (usec):
00:37:06.867 | 1.00th=[ 1532], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754],
00:37:06.867 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016],
00:37:06.867 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278],
00:37:06.867 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327],
00:37:06.867 | 99.99th=[31327]
00:37:06.867 bw ( KiB/s): min= 2048, max= 3256, per=4.26%, avg=2178.80, stdev=261.30, samples=20
00:37:06.867 iops : min= 512, max= 814, avg=544.70, stdev=65.33, samples=20
00:37:06.867 lat (msec) : 2=1.76%, 10=0.68%, 20=0.62%, 50=96.94%
00:37:06.867 cpu : usr=98.43%, sys=1.16%, ctx=35, majf=0, minf=9
00:37:06.867 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0%
00:37:06.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.867 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.867 issued rwts: total=5463,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.867 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.867 filename2: (groupid=0, jobs=1): err= 0: pid=4191572: Tue Nov 19 11:04:54 2024
00:37:06.867 read: IOPS=534, BW=2138KiB/s (2189kB/s)(20.9MiB/10029msec)
00:37:06.867 slat (usec): min=6, max=211, avg=44.17, stdev=21.99
00:37:06.867 clat (usec): min=7533, max=40033, avg=29535.34, stdev=1838.41
00:37:06.867 lat (usec): min=7551, max=40060, avg=29579.51, stdev=1837.42
00:37:06.867 clat percentiles (usec):
00:37:06.867 | 1.00th=[19530], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492],
00:37:06.868 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754],
00:37:06.868 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278],
00:37:06.868 | 99.00th=[30540], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327],
00:37:06.868 | 99.99th=[40109]
00:37:06.868 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2137.60, stdev=73.12, samples=20
00:37:06.868 iops : min= 512, max= 576, avg=534.40, stdev=18.28, samples=20
00:37:06.868 lat (msec) : 10=0.30%, 20=0.90%, 50=98.81%
00:37:06.868 cpu : usr=98.86%, sys=0.77%, ctx=4, majf=0, minf=10
00:37:06.868 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:06.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.868 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.868 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.868 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.868 filename2: (groupid=0, jobs=1): err= 0: pid=4191573: Tue Nov 19 11:04:54 2024
00:37:06.868 read: IOPS=530, BW=2124KiB/s (2174kB/s)(20.8MiB/10006msec)
00:37:06.868 slat (usec): min=9, max=104, avg=42.08, stdev=21.81
00:37:06.868 clat (usec): min=12690, max=70452, avg=29705.82, stdev=1728.36
00:37:06.868 lat (usec): min=12704, max=70491, avg=29747.90, stdev=1730.11
00:37:06.868 clat percentiles (usec):
00:37:06.868 | 1.00th=[28967], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492],
00:37:06.868 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754],
00:37:06.868 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278],
00:37:06.868 | 99.00th=[30802], 99.50th=[31065], 99.90th=[52167], 99.95th=[52167],
00:37:06.868 | 99.99th=[70779]
00:37:06.868 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2118.40, stdev=77.42, samples=20
00:37:06.868 iops : min= 480, max= 544, avg=529.60, stdev=19.35, samples=20
00:37:06.868 lat (msec) : 20=0.38%, 50=99.32%, 100=0.30%
00:37:06.868 cpu : usr=98.41%, sys=1.21%, ctx=6, majf=0, minf=9
00:37:06.868 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:06.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.868 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.868 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.868 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.868
00:37:06.868 Run status group 0 (all jobs):
00:37:06.868 READ: bw=49.9MiB/s (52.3MB/s), 2123KiB/s-2182KiB/s (2174kB/s-2234kB/s), io=500MiB (525MB), run=10004-10029msec
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.868 bdev_null0
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.868 [2024-11-19 11:04:55.206528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.868 bdev_null1
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:37:06.868 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:37:06.869 {
00:37:06.869 "params": {
00:37:06.869 "name": "Nvme$subsystem",
00:37:06.869 "trtype": "$TEST_TRANSPORT",
00:37:06.869 "traddr": "$NVMF_FIRST_TARGET_IP",
00:37:06.869 "adrfam": "ipv4",
00:37:06.869 "trsvcid": "$NVMF_PORT",
00:37:06.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:37:06.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:37:06.869 "hdgst": ${hdgst:-false},
00:37:06.869 "ddgst": ${ddgst:-false}
00:37:06.869 },
00:37:06.869 "method": "bdev_nvme_attach_controller"
00:37:06.869 }
00:37:06.869 EOF
00:37:06.869 )")
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:37:06.869 {
00:37:06.869 "params": {
00:37:06.869 "name": "Nvme$subsystem",
00:37:06.869 "trtype": "$TEST_TRANSPORT",
00:37:06.869 "traddr": "$NVMF_FIRST_TARGET_IP",
00:37:06.869 "adrfam": "ipv4",
00:37:06.869 "trsvcid": "$NVMF_PORT",
00:37:06.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:37:06.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:37:06.869 "hdgst": ${hdgst:-false},
00:37:06.869 "ddgst": ${ddgst:-false}
00:37:06.869 },
00:37:06.869 "method": "bdev_nvme_attach_controller"
00:37:06.869 }
00:37:06.869 EOF
00:37:06.869 )")
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:37:06.869 "params": {
00:37:06.869 "name": "Nvme0",
00:37:06.869 "trtype": "tcp",
00:37:06.869 "traddr": "10.0.0.2",
00:37:06.869 "adrfam": "ipv4",
00:37:06.869 "trsvcid": "4420",
00:37:06.869 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:06.869 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:37:06.869 "hdgst": false,
00:37:06.869 "ddgst": false
00:37:06.869 },
00:37:06.869 "method": "bdev_nvme_attach_controller"
00:37:06.869 },{
00:37:06.869 "params": {
00:37:06.869 "name": "Nvme1",
00:37:06.869 "trtype": "tcp",
00:37:06.869 "traddr": "10.0.0.2",
00:37:06.869 "adrfam": "ipv4",
00:37:06.869 "trsvcid": "4420",
00:37:06.869 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:37:06.869 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:37:06.869 "hdgst": false,
00:37:06.869 "ddgst": false
00:37:06.869 },
00:37:06.869 "method": "bdev_nvme_attach_controller"
00:37:06.869 }'
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:37:06.869 11:04:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:37:06.869 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:37:06.869 ...
00:37:06.869 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:37:06.869 ...
00:37:06.869 fio-3.35
00:37:06.869 Starting 4 threads
00:37:12.134
00:37:12.134 filename0: (groupid=0, jobs=1): err= 0: pid=4193522: Tue Nov 19 11:05:01 2024
00:37:12.134 read: IOPS=2819, BW=22.0MiB/s (23.1MB/s)(110MiB/5003msec)
00:37:12.134 slat (nsec): min=5976, max=58916, avg=11264.30, stdev=5901.30
00:37:12.134 clat (usec): min=543, max=5610, avg=2800.28, stdev=433.13
00:37:12.134 lat (usec): min=555, max=5624, avg=2811.54, stdev=433.57
00:37:12.134 clat percentiles (usec):
00:37:12.134 | 1.00th=[ 1663], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2442],
00:37:12.134 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2900],
00:37:12.134 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3261], 95.00th=[ 3458],
00:37:12.134 | 99.00th=[ 4015], 99.50th=[ 4293], 99.90th=[ 4948], 99.95th=[ 5014],
00:37:12.134 | 99.99th=[ 5276]
00:37:12.134 bw ( KiB/s): min=21040, max=23856, per=26.51%, avg=22668.44, stdev=812.22, samples=9
00:37:12.134 iops : min= 2630, max= 2982, avg=2833.56, stdev=101.53, samples=9
00:37:12.134 lat (usec) : 750=0.03%, 1000=0.04%
00:37:12.134 lat (msec) : 2=2.80%, 4=96.13%, 10=1.01%
00:37:12.134 cpu : usr=95.92%, sys=3.72%, ctx=15, majf=0, minf=9
00:37:12.134 IO depths : 1=0.6%, 2=12.1%, 4=60.2%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:12.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:12.134 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:12.134 issued rwts: total=14105,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:12.134 latency : target=0, window=0, percentile=100.00%, depth=8
00:37:12.134 filename0: (groupid=0, jobs=1): err= 0: pid=4193523: Tue Nov 19 11:05:01 2024
00:37:12.134 read: IOPS=2667, BW=20.8MiB/s (21.8MB/s)(104MiB/5001msec)
00:37:12.134 slat (nsec): min=6013, max=57914, avg=11687.34, stdev=5524.05
00:37:12.134 clat (usec): min=624, max=5954, avg=2962.79, stdev=511.70
00:37:12.134 lat (usec): min=634, max=5961, avg=2974.48, stdev=511.66
00:37:12.134 clat percentiles (usec):
00:37:12.134 | 1.00th=[ 1795], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2606],
00:37:12.134 | 30.00th=[ 2769], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 2999],
00:37:12.134 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3556], 95.00th=[ 3949],
00:37:12.134 | 99.00th=[ 4686], 99.50th=[ 4948], 99.90th=[ 5538], 99.95th=[ 5604],
00:37:12.134 | 99.99th=[ 5932]
00:37:12.134 bw ( KiB/s): min=20400, max=22336, per=24.92%, avg=21310.67, stdev=740.34, samples=9
00:37:12.134 iops : min= 2550, max= 2792, avg=2663.78, stdev=92.52, samples=9
00:37:12.134 lat (usec) : 750=0.04%, 1000=0.01%
00:37:12.134 lat (msec) : 2=1.79%, 4=93.91%, 10=4.25%
00:37:12.134 cpu : usr=96.52%, sys=3.12%, ctx=9, majf=0, minf=9
00:37:12.134 IO depths : 1=0.2%, 2=9.1%, 4=61.9%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:12.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:12.134 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:12.134 issued rwts: total=13338,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:12.134 latency : target=0, window=0, percentile=100.00%, depth=8
00:37:12.134 filename1: (groupid=0, jobs=1): err= 0: pid=4193524: Tue Nov 19 11:05:01 2024
00:37:12.134 read: IOPS=2568, BW=20.1MiB/s (21.0MB/s)(100MiB/5001msec)
00:37:12.134 slat (nsec): min=6000, max=57977, avg=11167.19, stdev=5381.57
00:37:12.134 clat (usec): min=530, max=5970, avg=3081.38, stdev=524.99
00:37:12.134 lat (usec): min=541, max=5976, avg=3092.55, stdev=524.63
00:37:12.134 clat percentiles (usec):
00:37:12.134 | 1.00th=[ 1893], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2769],
00:37:12.134 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2999], 60.00th=[ 3064],
00:37:12.134 | 70.00th=[ 3195], 80.00th=[ 3359], 90.00th=[ 3720], 95.00th=[ 4146],
00:37:12.134 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 5473], 99.95th=[ 5538],
00:37:12.134 | 99.99th=[ 5932]
00:37:12.134 bw ( KiB/s): min=19440, max=21888, per=23.99%, avg=20510.22, stdev=782.63, samples=9
00:37:12.134 iops : min= 2430, max= 2736, avg=2563.78, stdev=97.83, samples=9
00:37:12.134 lat (usec) : 750=0.02%, 1000=0.07%
00:37:12.134 lat (msec) : 2=1.12%, 4=92.16%, 10=6.62%
00:37:12.134 cpu : usr=96.68%, sys=2.98%, ctx=11, majf=0, minf=9
00:37:12.134 IO depths : 1=0.4%, 2=5.4%, 4=66.0%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:12.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:12.134 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:12.134 issued rwts: total=12846,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:12.134 latency : target=0, window=0, percentile=100.00%, depth=8
00:37:12.134 filename1: (groupid=0, jobs=1): err= 0: pid=4193525: Tue Nov 19 11:05:01 2024
00:37:12.134 read: IOPS=2635, BW=20.6MiB/s (21.6MB/s)(103MiB/5002msec)
00:37:12.134 slat (nsec): min=5956, max=58007, avg=11188.20, stdev=5297.96
00:37:12.134 clat (usec): min=1124, max=5698, avg=3001.92, stdev=499.64
00:37:12.134 lat (usec): min=1135, max=5705, avg=3013.11, stdev=499.51
00:37:12.134 clat percentiles (usec):
00:37:12.134 | 1.00th=[ 1795], 5.00th=[ 2278], 10.00th=[ 2474], 20.00th=[ 2671],
00:37:12.134 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2999],
00:37:12.134 | 70.00th=[ 3130], 80.00th=[ 3294], 90.00th=[ 3621], 95.00th=[ 3982],
00:37:12.134 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5211], 99.95th=[ 5342],
00:37:12.134 | 99.99th=[ 5669]
00:37:12.134 bw ( KiB/s): min=20000, max=21808, per=24.59%, avg=21022.22, stdev=538.53, samples=9
00:37:12.134 iops : min= 2500, max= 2726, avg=2627.78, stdev=67.32, samples=9
00:37:12.134 lat (msec) : 2=1.59%, 4=93.81%, 10=4.61%
00:37:12.134 cpu : usr=96.72%, sys=2.92%, ctx=14, majf=0, minf=9
00:37:12.134 IO depths : 1=0.8%, 2=5.6%, 4=65.9%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:12.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:12.134 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:12.134 issued rwts: total=13181,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:12.134 latency : target=0, window=0, percentile=100.00%, depth=8
00:37:12.134
00:37:12.134 Run status group 0 (all jobs):
00:37:12.134 READ: bw=83.5MiB/s (87.6MB/s), 20.1MiB/s-22.0MiB/s (21.0MB/s-23.1MB/s), io=418MiB (438MB), run=5001-5003msec
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:12.134
00:37:12.134 real 0m24.439s
00:37:12.134 user 4m52.338s
00:37:12.134 sys 0m5.057s
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:12.134 11:05:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:12.134 ************************************
00:37:12.134 END TEST fio_dif_rand_params
00:37:12.134 ************************************
00:37:12.134 11:05:01 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest
00:37:12.134 11:05:01 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:37:12.134 11:05:01 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:37:12.134 11:05:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:37:12.134 ************************************
00:37:12.134 START TEST fio_dif_digest
00:37:12.134 ************************************
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@"
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:12.134 11:05:01 nvmf_dif.fio_dif_digest --
common/autotest_common.sh@10 -- # set +x 00:37:12.135 bdev_null0 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:12.135 [2024-11-19 11:05:01.739814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:12.135 
11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:12.135 { 00:37:12.135 "params": { 00:37:12.135 "name": "Nvme$subsystem", 00:37:12.135 "trtype": "$TEST_TRANSPORT", 00:37:12.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:12.135 "adrfam": "ipv4", 00:37:12.135 "trsvcid": "$NVMF_PORT", 00:37:12.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:12.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:12.135 "hdgst": ${hdgst:-false}, 00:37:12.135 "ddgst": ${ddgst:-false} 00:37:12.135 }, 00:37:12.135 "method": "bdev_nvme_attach_controller" 00:37:12.135 } 00:37:12.135 EOF 00:37:12.135 )") 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1345 -- # shift 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:12.135 "params": { 00:37:12.135 "name": "Nvme0", 00:37:12.135 "trtype": "tcp", 00:37:12.135 "traddr": "10.0.0.2", 00:37:12.135 "adrfam": "ipv4", 00:37:12.135 "trsvcid": "4420", 00:37:12.135 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:12.135 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:12.135 "hdgst": true, 00:37:12.135 "ddgst": true 00:37:12.135 }, 00:37:12.135 "method": "bdev_nvme_attach_controller" 00:37:12.135 }' 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:12.135 11:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:12.392 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:12.392 ... 00:37:12.392 fio-3.35 00:37:12.392 Starting 3 threads 00:37:24.587 00:37:24.587 filename0: (groupid=0, jobs=1): err= 0: pid=861: Tue Nov 19 11:05:12 2024 00:37:24.587 read: IOPS=299, BW=37.5MiB/s (39.3MB/s)(377MiB/10048msec) 00:37:24.587 slat (nsec): min=6518, max=58731, avg=22595.12, stdev=5989.85 00:37:24.587 clat (usec): min=7644, max=49569, avg=9965.50, stdev=1184.00 00:37:24.587 lat (usec): min=7666, max=49592, avg=9988.09, stdev=1183.76 00:37:24.587 clat percentiles (usec): 00:37:24.587 | 1.00th=[ 8455], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:37:24.587 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:37:24.587 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:37:24.587 | 99.00th=[11469], 99.50th=[11731], 99.90th=[12911], 99.95th=[47449], 00:37:24.587 | 99.99th=[49546] 00:37:24.587 bw ( KiB/s): min=37888, max=39424, per=35.25%, avg=38540.80, stdev=375.83, samples=20 00:37:24.587 iops : min= 296, max= 308, avg=301.10, stdev= 2.94, samples=20 00:37:24.587 lat (msec) : 
10=51.88%, 20=48.06%, 50=0.07% 00:37:24.587 cpu : usr=93.74%, sys=4.47%, ctx=306, majf=0, minf=90 00:37:24.587 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:24.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.587 issued rwts: total=3013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:24.588 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:24.588 filename0: (groupid=0, jobs=1): err= 0: pid=862: Tue Nov 19 11:05:12 2024 00:37:24.588 read: IOPS=275, BW=34.5MiB/s (36.1MB/s)(346MiB/10047msec) 00:37:24.588 slat (nsec): min=6276, max=64562, avg=17914.40, stdev=7608.89 00:37:24.588 clat (usec): min=8433, max=49509, avg=10841.09, stdev=1238.98 00:37:24.588 lat (usec): min=8457, max=49524, avg=10859.01, stdev=1239.21 00:37:24.588 clat percentiles (usec): 00:37:24.588 | 1.00th=[ 9241], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:37:24.588 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:37:24.588 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:37:24.588 | 99.00th=[12780], 99.50th=[13042], 99.90th=[14746], 99.95th=[47449], 00:37:24.588 | 99.99th=[49546] 00:37:24.588 bw ( KiB/s): min=34304, max=36608, per=32.42%, avg=35443.20, stdev=672.07, samples=20 00:37:24.588 iops : min= 268, max= 286, avg=276.90, stdev= 5.25, samples=20 00:37:24.588 lat (msec) : 10=10.86%, 20=89.07%, 50=0.07% 00:37:24.588 cpu : usr=96.54%, sys=3.16%, ctx=17, majf=0, minf=118 00:37:24.588 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:24.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.588 issued rwts: total=2771,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:24.588 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:24.588 
filename0: (groupid=0, jobs=1): err= 0: pid=863: Tue Nov 19 11:05:12 2024 00:37:24.588 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(350MiB/10048msec) 00:37:24.588 slat (nsec): min=6211, max=55511, avg=18473.76, stdev=7576.72 00:37:24.588 clat (usec): min=8269, max=51168, avg=10737.68, stdev=1277.49 00:37:24.588 lat (usec): min=8280, max=51194, avg=10756.15, stdev=1277.79 00:37:24.588 clat percentiles (usec): 00:37:24.588 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[ 9765], 20.00th=[10159], 00:37:24.588 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:37:24.588 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11600], 95.00th=[11863], 00:37:24.588 | 99.00th=[12518], 99.50th=[12780], 99.90th=[15533], 99.95th=[49021], 00:37:24.588 | 99.99th=[51119] 00:37:24.588 bw ( KiB/s): min=34560, max=37120, per=32.74%, avg=35788.80, stdev=763.95, samples=20 00:37:24.588 iops : min= 270, max= 290, avg=279.60, stdev= 5.97, samples=20 00:37:24.588 lat (msec) : 10=14.72%, 20=85.20%, 50=0.04%, 100=0.04% 00:37:24.588 cpu : usr=96.47%, sys=3.20%, ctx=15, majf=0, minf=127 00:37:24.588 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:24.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.588 issued rwts: total=2798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:24.588 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:24.588 00:37:24.588 Run status group 0 (all jobs): 00:37:24.588 READ: bw=107MiB/s (112MB/s), 34.5MiB/s-37.5MiB/s (36.1MB/s-39.3MB/s), io=1073MiB (1125MB), run=10047-10048msec 00:37:24.588 11:05:12 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:24.588 11:05:12 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:24.588 11:05:12 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:24.588 11:05:12 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:37:24.588 11:05:12 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:24.588 11:05:12 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:24.588 11:05:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.588 11:05:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:24.588 11:05:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.588 11:05:12 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:24.588 11:05:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.588 11:05:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:24.588 11:05:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.588 00:37:24.588 real 0m11.283s 00:37:24.588 user 0m35.360s 00:37:24.588 sys 0m1.449s 00:37:24.588 11:05:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:24.588 11:05:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:24.588 ************************************ 00:37:24.588 END TEST fio_dif_digest 00:37:24.588 ************************************ 00:37:24.588 11:05:13 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:24.588 11:05:13 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:24.588 11:05:13 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:24.588 11:05:13 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:24.588 11:05:13 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:24.588 11:05:13 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:24.588 11:05:13 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:24.588 11:05:13 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:24.588 rmmod nvme_tcp 00:37:24.588 rmmod nvme_fabrics 00:37:24.588 rmmod 
nvme_keyring 00:37:24.588 11:05:13 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:24.588 11:05:13 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:24.588 11:05:13 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:24.588 11:05:13 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 4186198 ']' 00:37:24.588 11:05:13 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 4186198 00:37:24.588 11:05:13 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 4186198 ']' 00:37:24.588 11:05:13 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 4186198 00:37:24.588 11:05:13 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:37:24.588 11:05:13 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:24.588 11:05:13 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4186198 00:37:24.588 11:05:13 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:24.588 11:05:13 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:24.588 11:05:13 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4186198' 00:37:24.588 killing process with pid 4186198 00:37:24.588 11:05:13 nvmf_dif -- common/autotest_common.sh@973 -- # kill 4186198 00:37:24.588 11:05:13 nvmf_dif -- common/autotest_common.sh@978 -- # wait 4186198 00:37:24.588 11:05:13 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:24.588 11:05:13 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:26.493 Waiting for block devices as requested 00:37:26.493 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:37:26.493 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:26.493 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:26.493 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:26.752 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:26.752 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:26.752 0000:00:04.2 (8086 2021): vfio-pci -> 
ioatdma 00:37:27.012 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:27.012 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:27.012 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:27.012 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:27.271 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:27.271 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:27.271 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:27.529 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:27.529 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:27.529 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:27.788 11:05:17 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:27.788 11:05:17 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:27.788 11:05:17 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:27.788 11:05:17 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:27.788 11:05:17 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:27.788 11:05:17 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:27.788 11:05:17 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:27.788 11:05:17 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:27.788 11:05:17 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:27.788 11:05:17 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:27.788 11:05:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.691 11:05:19 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:29.691 00:37:29.691 real 1m14.418s 00:37:29.691 user 7m10.306s 00:37:29.691 sys 0m20.599s 00:37:29.691 11:05:19 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:29.691 11:05:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:29.691 ************************************ 00:37:29.691 END TEST nvmf_dif 00:37:29.691 ************************************ 00:37:29.691 
11:05:19 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:29.691 11:05:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:29.691 11:05:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:29.691 11:05:19 -- common/autotest_common.sh@10 -- # set +x 00:37:29.949 ************************************ 00:37:29.949 START TEST nvmf_abort_qd_sizes 00:37:29.949 ************************************ 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:29.950 * Looking for test storage... 00:37:29.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:29.950 
11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:29.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.950 --rc 
genhtml_branch_coverage=1 00:37:29.950 --rc genhtml_function_coverage=1 00:37:29.950 --rc genhtml_legend=1 00:37:29.950 --rc geninfo_all_blocks=1 00:37:29.950 --rc geninfo_unexecuted_blocks=1 00:37:29.950 00:37:29.950 ' 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:29.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.950 --rc genhtml_branch_coverage=1 00:37:29.950 --rc genhtml_function_coverage=1 00:37:29.950 --rc genhtml_legend=1 00:37:29.950 --rc geninfo_all_blocks=1 00:37:29.950 --rc geninfo_unexecuted_blocks=1 00:37:29.950 00:37:29.950 ' 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:29.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.950 --rc genhtml_branch_coverage=1 00:37:29.950 --rc genhtml_function_coverage=1 00:37:29.950 --rc genhtml_legend=1 00:37:29.950 --rc geninfo_all_blocks=1 00:37:29.950 --rc geninfo_unexecuted_blocks=1 00:37:29.950 00:37:29.950 ' 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:29.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.950 --rc genhtml_branch_coverage=1 00:37:29.950 --rc genhtml_function_coverage=1 00:37:29.950 --rc genhtml_legend=1 00:37:29.950 --rc geninfo_all_blocks=1 00:37:29.950 --rc geninfo_unexecuted_blocks=1 00:37:29.950 00:37:29.950 ' 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:29.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:29.950 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:29.951 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:29.951 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:29.951 11:05:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:29.951 11:05:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.951 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:29.951 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:29.951 11:05:19 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:37:29.951 11:05:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:36.520 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:36.520 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:36.520 Found net devices under 0000:86:00.0: cvl_0_0 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:36.520 Found net devices under 0000:86:00.1: cvl_0_1 00:37:36.520 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:36.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:36.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:37:36.521 00:37:36.521 --- 10.0.0.2 ping statistics --- 00:37:36.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:36.521 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:36.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:36.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:37:36.521 00:37:36.521 --- 10.0.0.1 ping statistics --- 00:37:36.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:36.521 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:36.521 11:05:25 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:39.054 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:39.054 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:39.054 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:39.054 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:39.054 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:39.054 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:39.054 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:39.054 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:39.054 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:39.054 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:39.054 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:39.054 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:39.054 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:39.054 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:39.054 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:37:39.054 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:40.465 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=9157 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 9157 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 9157 ']' 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:40.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:40.465 11:05:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:40.465 [2024-11-19 11:05:30.046054] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:37:40.465 [2024-11-19 11:05:30.046106] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:40.465 [2024-11-19 11:05:30.127898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:40.465 [2024-11-19 11:05:30.171619] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:40.465 [2024-11-19 11:05:30.171658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:40.465 [2024-11-19 11:05:30.171665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:40.465 [2024-11-19 11:05:30.171671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:40.465 [2024-11-19 11:05:30.171676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:40.465 [2024-11-19 11:05:30.173310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:40.465 [2024-11-19 11:05:30.173421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:40.465 [2024-11-19 11:05:30.173529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:40.465 [2024-11-19 11:05:30.173530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:40.743 11:05:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:40.743 ************************************ 00:37:40.743 START TEST spdk_target_abort 00:37:40.743 ************************************ 00:37:40.743 11:05:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:37:40.743 11:05:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:40.743 11:05:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:37:40.743 11:05:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.743 11:05:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:44.033 spdk_targetn1 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:44.033 [2024-11-19 11:05:33.181211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:44.033 [2024-11-19 11:05:33.235325] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:44.033 11:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:47.313 Initializing NVMe Controllers 00:37:47.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:47.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:47.313 Initialization complete. Launching workers. 
00:37:47.313 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15689, failed: 0 00:37:47.313 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1392, failed to submit 14297 00:37:47.313 success 700, unsuccessful 692, failed 0 00:37:47.313 11:05:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:47.313 11:05:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:50.592 Initializing NVMe Controllers 00:37:50.592 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:50.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:50.592 Initialization complete. Launching workers. 00:37:50.592 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8691, failed: 0 00:37:50.592 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1268, failed to submit 7423 00:37:50.592 success 314, unsuccessful 954, failed 0 00:37:50.592 11:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:50.592 11:05:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:53.873 Initializing NVMe Controllers 00:37:53.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:53.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:53.873 Initialization complete. Launching workers. 
00:37:53.873 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39174, failed: 0 00:37:53.873 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2807, failed to submit 36367 00:37:53.873 success 593, unsuccessful 2214, failed 0 00:37:53.873 11:05:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:53.873 11:05:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.873 11:05:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:53.873 11:05:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.873 11:05:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:53.873 11:05:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.873 11:05:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.246 11:05:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.246 11:05:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 9157 00:37:55.246 11:05:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 9157 ']' 00:37:55.246 11:05:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 9157 00:37:55.246 11:05:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:37:55.246 11:05:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:55.246 11:05:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 9157 00:37:55.246 11:05:44 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:55.246 11:05:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:55.246 11:05:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 9157' 00:37:55.246 killing process with pid 9157 00:37:55.246 11:05:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 9157 00:37:55.247 11:05:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 9157 00:37:55.506 00:37:55.506 real 0m14.753s 00:37:55.506 user 0m56.245s 00:37:55.506 sys 0m2.626s 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.506 ************************************ 00:37:55.506 END TEST spdk_target_abort 00:37:55.506 ************************************ 00:37:55.506 11:05:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:55.506 11:05:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:55.506 11:05:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:55.506 11:05:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:55.506 ************************************ 00:37:55.506 START TEST kernel_target_abort 00:37:55.506 ************************************ 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:37:55.506 11:05:45 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:55.506 11:05:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:58.800 Waiting for block devices as requested 00:37:58.800 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:37:58.800 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:58.800 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:58.800 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:58.800 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:58.800 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:58.800 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:58.800 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:59.059 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:59.059 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:59.059 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:59.059 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:59.318 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:59.318 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:59.318 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:59.580 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:59.580 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:59.580 No valid GPT data, bailing 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:59.580 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:37:59.838 00:37:59.838 Discovery Log Number of Records 2, Generation counter 2 00:37:59.838 =====Discovery Log Entry 0====== 00:37:59.838 trtype: tcp 00:37:59.838 adrfam: ipv4 00:37:59.838 subtype: current discovery subsystem 00:37:59.838 treq: not specified, sq flow control disable supported 00:37:59.838 portid: 1 00:37:59.838 trsvcid: 4420 00:37:59.838 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:59.838 traddr: 10.0.0.1 00:37:59.838 eflags: none 00:37:59.838 sectype: none 00:37:59.838 =====Discovery Log Entry 1====== 00:37:59.838 trtype: tcp 00:37:59.838 adrfam: ipv4 00:37:59.838 subtype: nvme subsystem 00:37:59.838 treq: not specified, sq flow control disable supported 00:37:59.838 portid: 1 00:37:59.838 trsvcid: 4420 00:37:59.838 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:59.838 traddr: 10.0.0.1 00:37:59.838 eflags: none 00:37:59.838 sectype: none 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:59.838 11:05:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:03.119 Initializing NVMe Controllers 00:38:03.119 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:03.119 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:03.119 Initialization complete. Launching workers. 
00:38:03.119 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95216, failed: 0 00:38:03.119 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 95216, failed to submit 0 00:38:03.119 success 0, unsuccessful 95216, failed 0 00:38:03.119 11:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:03.119 11:05:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:06.401 Initializing NVMe Controllers 00:38:06.401 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:06.401 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:06.401 Initialization complete. Launching workers. 00:38:06.401 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 151716, failed: 0 00:38:06.401 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38326, failed to submit 113390 00:38:06.401 success 0, unsuccessful 38326, failed 0 00:38:06.401 11:05:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:06.401 11:05:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:09.684 Initializing NVMe Controllers 00:38:09.684 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:09.684 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:09.684 Initialization complete. Launching workers. 
00:38:09.684 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142577, failed: 0 00:38:09.684 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35706, failed to submit 106871 00:38:09.684 success 0, unsuccessful 35706, failed 0 00:38:09.684 11:05:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:09.684 11:05:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:09.684 11:05:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:09.684 11:05:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:09.684 11:05:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:09.684 11:05:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:09.684 11:05:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:09.684 11:05:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:09.684 11:05:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:09.684 11:05:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:12.220 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:12.220 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:12.220 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:12.220 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:12.220 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:12.220 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:12.220 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:12.220 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:12.221 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:12.221 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:12.221 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:12.221 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:12.221 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:12.221 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:12.221 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:12.221 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:13.600 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:38:13.600 00:38:13.600 real 0m18.073s 00:38:13.600 user 0m9.186s 00:38:13.600 sys 0m5.045s 00:38:13.600 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:13.600 11:06:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:13.600 ************************************ 00:38:13.600 END TEST kernel_target_abort 00:38:13.600 ************************************ 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:13.600 rmmod nvme_tcp 00:38:13.600 rmmod nvme_fabrics 00:38:13.600 rmmod nvme_keyring 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 9157 ']' 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 9157 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 9157 ']' 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 9157 00:38:13.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (9157) - No such process 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 9157 is not found' 00:38:13.600 Process with pid 9157 is not found 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:13.600 11:06:03 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:16.894 Waiting for block devices as requested 00:38:16.894 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:16.894 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:16.894 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:16.894 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:16.894 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:16.894 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:16.894 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:16.894 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:17.153 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:17.153 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:17.153 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:17.412 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:17.412 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:17.412 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:17.671 0000:80:04.2 (8086 2021): 
vfio-pci -> ioatdma 00:38:17.671 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:17.671 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:17.930 11:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:17.930 11:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:17.930 11:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:17.930 11:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:17.930 11:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:17.930 11:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:17.930 11:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:17.930 11:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:17.930 11:06:07 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:17.930 11:06:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:17.930 11:06:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:19.835 11:06:09 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:19.835 00:38:19.835 real 0m50.047s 00:38:19.835 user 1m9.753s 00:38:19.835 sys 0m16.493s 00:38:19.835 11:06:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:19.835 11:06:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:19.835 ************************************ 00:38:19.835 END TEST nvmf_abort_qd_sizes 00:38:19.835 ************************************ 00:38:19.835 11:06:09 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:19.835 11:06:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:19.835 11:06:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:19.835 
11:06:09 -- common/autotest_common.sh@10 -- # set +x 00:38:19.835 ************************************ 00:38:19.835 START TEST keyring_file 00:38:19.835 ************************************ 00:38:19.835 11:06:09 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:20.095 * Looking for test storage... 00:38:20.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:20.095 11:06:09 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:20.095 11:06:09 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:38:20.095 11:06:09 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:20.095 11:06:09 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:20.095 11:06:09 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:20.095 11:06:09 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:20.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:20.095 --rc genhtml_branch_coverage=1 00:38:20.095 --rc genhtml_function_coverage=1 00:38:20.095 --rc genhtml_legend=1 00:38:20.095 --rc geninfo_all_blocks=1 00:38:20.095 --rc geninfo_unexecuted_blocks=1 00:38:20.095 00:38:20.095 ' 00:38:20.095 11:06:09 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:20.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:20.095 --rc genhtml_branch_coverage=1 00:38:20.095 --rc genhtml_function_coverage=1 00:38:20.095 --rc genhtml_legend=1 00:38:20.095 --rc geninfo_all_blocks=1 00:38:20.095 --rc geninfo_unexecuted_blocks=1 00:38:20.095 00:38:20.095 ' 
00:38:20.095 11:06:09 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:20.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:20.095 --rc genhtml_branch_coverage=1 00:38:20.095 --rc genhtml_function_coverage=1 00:38:20.095 --rc genhtml_legend=1 00:38:20.095 --rc geninfo_all_blocks=1 00:38:20.095 --rc geninfo_unexecuted_blocks=1 00:38:20.095 00:38:20.095 ' 00:38:20.095 11:06:09 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:20.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:20.095 --rc genhtml_branch_coverage=1 00:38:20.095 --rc genhtml_function_coverage=1 00:38:20.095 --rc genhtml_legend=1 00:38:20.095 --rc geninfo_all_blocks=1 00:38:20.095 --rc geninfo_unexecuted_blocks=1 00:38:20.095 00:38:20.095 ' 00:38:20.095 11:06:09 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:20.095 11:06:09 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:20.095 11:06:09 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:20.095 11:06:09 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:20.095 11:06:09 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:20.095 11:06:09 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:20.095 11:06:09 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:20.095 11:06:09 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:20.095 11:06:09 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:20.095 11:06:09 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:20.095 11:06:09 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:20.095 11:06:09 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:20.095 11:06:09 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:20.095 
11:06:09 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:38:20.095 11:06:09 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:38:20.095 11:06:09 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:20.095 11:06:09 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:20.095 11:06:09 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:20.095 11:06:09 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:20.095 11:06:09 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:20.095 11:06:09 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:20.096 11:06:09 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:20.096 11:06:09 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:20.096 11:06:09 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:20.096 11:06:09 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:20.096 11:06:09 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:20.096 11:06:09 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:20.096 11:06:09 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:20.096 11:06:09 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:20.096 11:06:09 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:20.096 11:06:09 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:20.096 11:06:09 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:20.096 11:06:09 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:38:20.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:20.096 11:06:09 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:20.096 11:06:09 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:20.096 11:06:09 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:20.096 11:06:09 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:20.096 11:06:09 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:20.096 11:06:09 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:20.096 11:06:09 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:20.096 11:06:09 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:20.096 11:06:09 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:20.096 11:06:09 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:20.096 11:06:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:20.096 11:06:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:20.096 11:06:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:20.096 11:06:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:20.096 11:06:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:20.096 11:06:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mlMAHlCkMw 00:38:20.096 11:06:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:20.096 11:06:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:20.096 11:06:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:20.096 11:06:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:20.096 11:06:09 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:38:20.096 11:06:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:20.096 11:06:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:20.096 11:06:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mlMAHlCkMw 00:38:20.096 11:06:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mlMAHlCkMw 00:38:20.096 11:06:09 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.mlMAHlCkMw 00:38:20.096 11:06:09 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:20.096 11:06:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:20.096 11:06:09 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:20.096 11:06:09 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:20.096 11:06:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:20.096 11:06:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:20.355 11:06:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eSmu3IZwqR 00:38:20.355 11:06:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:20.355 11:06:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:20.355 11:06:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:20.355 11:06:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:20.355 11:06:09 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:20.355 11:06:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:20.355 11:06:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:20.355 11:06:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eSmu3IZwqR 00:38:20.355 11:06:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eSmu3IZwqR 00:38:20.355 11:06:09 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.eSmu3IZwqR 
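The `format_interchange_psk` / `format_key` calls traced above pipe the hex key through an inline `python -` snippet to produce an NVMe TLS PSK in interchange format. A minimal sketch of that formatting, under the assumption that it follows the NVMe/TCP PSK interchange layout `NVMeTLSkey-1:<hash>:<base64(key || CRC32(key))>:` with the CRC32 appended little-endian and `<hash>` being `00` for no hash (the `digest=0` seen in the trace), `01` for SHA-256, or `02` for SHA-384:

```python
# Hedged sketch of format_interchange_psk from the trace: wrap a raw hex PSK
# in the NVMe TLS PSK interchange format. Layout assumptions are stated in
# the lead-in; the little-endian CRC32 packing is an assumption as well.
import base64
import struct
import zlib

def format_interchange_psk(hex_key: str, digest: int) -> str:
    key = bytes.fromhex(hex_key)
    crc = struct.pack("<I", zlib.crc32(key))  # 4-byte CRC32 appended to the key
    return "NVMeTLSkey-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode())
```

The resulting string is what gets written to the `mktemp` path (e.g. `/tmp/tmp.mlMAHlCkMw` above) and `chmod 0600`-ed so the bperf/keyring tests can reference it as `key0`/`key1`.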
00:38:20.355 11:06:09 keyring_file -- keyring/file.sh@30 -- # tgtpid=18422 00:38:20.355 11:06:09 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:20.355 11:06:09 keyring_file -- keyring/file.sh@32 -- # waitforlisten 18422 00:38:20.355 11:06:09 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 18422 ']' 00:38:20.355 11:06:09 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:20.355 11:06:09 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:20.355 11:06:09 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:20.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:20.355 11:06:09 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:20.355 11:06:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:20.355 [2024-11-19 11:06:09.982129] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:38:20.355 [2024-11-19 11:06:09.982177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid18422 ] 00:38:20.355 [2024-11-19 11:06:10.058037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.355 [2024-11-19 11:06:10.104368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:20.614 11:06:10 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:20.614 [2024-11-19 11:06:10.308348] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:20.614 null0 00:38:20.614 [2024-11-19 11:06:10.340400] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:20.614 [2024-11-19 11:06:10.340770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.614 11:06:10 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:20.614 [2024-11-19 11:06:10.368462] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:20.614 request: 00:38:20.614 { 00:38:20.614 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:20.614 "secure_channel": false, 00:38:20.614 "listen_address": { 00:38:20.614 "trtype": "tcp", 00:38:20.614 "traddr": "127.0.0.1", 00:38:20.614 "trsvcid": "4420" 00:38:20.614 }, 00:38:20.614 "method": "nvmf_subsystem_add_listener", 00:38:20.614 "req_id": 1 00:38:20.614 } 00:38:20.614 Got JSON-RPC error response 00:38:20.614 response: 00:38:20.614 { 00:38:20.614 "code": -32602, 00:38:20.614 "message": "Invalid parameters" 00:38:20.614 } 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:20.614 11:06:10 keyring_file -- keyring/file.sh@47 -- # bperfpid=18463 00:38:20.614 11:06:10 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:20.614 11:06:10 keyring_file -- keyring/file.sh@49 -- # waitforlisten 18463 /var/tmp/bperf.sock 00:38:20.614 11:06:10 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 18463 ']' 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:20.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:20.614 11:06:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:20.872 [2024-11-19 11:06:10.420526] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:38:20.873 [2024-11-19 11:06:10.420568] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid18463 ] 00:38:20.873 [2024-11-19 11:06:10.476947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.873 [2024-11-19 11:06:10.518320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:20.873 11:06:10 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:20.873 11:06:10 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:20.873 11:06:10 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mlMAHlCkMw 00:38:20.873 11:06:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mlMAHlCkMw 00:38:21.131 11:06:10 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eSmu3IZwqR 00:38:21.131 11:06:10 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eSmu3IZwqR 00:38:21.390 11:06:11 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:21.390 11:06:11 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:21.390 11:06:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.390 11:06:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:21.390 11:06:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:21.649 11:06:11 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.mlMAHlCkMw == \/\t\m\p\/\t\m\p\.\m\l\M\A\H\l\C\k\M\w ]] 00:38:21.649 11:06:11 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:21.649 11:06:11 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:21.649 11:06:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.649 11:06:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:21.649 11:06:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:21.649 11:06:11 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.eSmu3IZwqR == \/\t\m\p\/\t\m\p\.\e\S\m\u\3\I\Z\w\q\R ]] 00:38:21.649 11:06:11 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:21.649 11:06:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:21.649 11:06:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:21.649 11:06:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.649 11:06:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:21.649 11:06:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
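The `get_key`/`get_refcnt` helpers exercised above all follow one pattern: call the `keyring_get_keys` RPC over `/var/tmp/bperf.sock`, then pipe the JSON array through `jq '.[] | select(.name == "keyN")'` and read `.path` or `.refcnt`. The same selection in Python, over a sample response whose shape is an assumption modeled only on the fields the log inspects (`name`, `path`, `refcnt`):

```python
# Hedged sketch: Python equivalent of the log's jq selection over
# keyring_get_keys output. The sample payload is illustrative, mirroring the
# key names and temp paths that appear in this run; it is not a captured RPC
# response.
import json

sample = json.loads("""
[
  {"name": "key0", "path": "/tmp/tmp.mlMAHlCkMw", "refcnt": 1},
  {"name": "key1", "path": "/tmp/tmp.eSmu3IZwqR", "refcnt": 1}
]
""")

def get_key(keys, name):
    # jq: .[] | select(.name == $name)
    return next(k for k in keys if k["name"] == name)

def get_refcnt(keys, name):
    # jq: .[] | select(.name == $name) | .refcnt
    return get_key(keys, name)["refcnt"]
```

The test's `(( 1 == 1 ))` and `(( 2 == 2 ))` checks compare these `refcnt` values before and after `bdev_nvme_attach_controller` takes a reference on `key0`.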
00:38:21.907 11:06:11 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:21.907 11:06:11 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:21.907 11:06:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:21.907 11:06:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:21.907 11:06:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.908 11:06:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:21.908 11:06:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.166 11:06:11 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:22.166 11:06:11 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:22.166 11:06:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:22.424 [2024-11-19 11:06:11.983889] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:22.424 nvme0n1 00:38:22.424 11:06:12 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:22.424 11:06:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:22.424 11:06:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:22.424 11:06:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.424 11:06:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.424 11:06:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:38:22.683 11:06:12 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:22.683 11:06:12 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:22.683 11:06:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:22.683 11:06:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:22.683 11:06:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.683 11:06:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:22.683 11:06:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.683 11:06:12 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:22.683 11:06:12 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:22.941 Running I/O for 1 seconds... 00:38:23.876 19412.00 IOPS, 75.83 MiB/s 00:38:23.876 Latency(us) 00:38:23.876 [2024-11-19T10:06:13.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.876 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:23.876 nvme0n1 : 1.00 19464.29 76.03 0.00 0.00 6564.91 2356.18 10048.85 00:38:23.876 [2024-11-19T10:06:13.668Z] =================================================================================================================== 00:38:23.876 [2024-11-19T10:06:13.668Z] Total : 19464.29 76.03 0.00 0.00 6564.91 2356.18 10048.85 00:38:23.876 { 00:38:23.876 "results": [ 00:38:23.876 { 00:38:23.876 "job": "nvme0n1", 00:38:23.876 "core_mask": "0x2", 00:38:23.876 "workload": "randrw", 00:38:23.876 "percentage": 50, 00:38:23.876 "status": "finished", 00:38:23.876 "queue_depth": 128, 00:38:23.876 "io_size": 4096, 00:38:23.876 "runtime": 1.003941, 00:38:23.876 "iops": 19464.29122826939, 00:38:23.876 "mibps": 76.0323876104273, 00:38:23.876 
"io_failed": 0, 00:38:23.876 "io_timeout": 0, 00:38:23.876 "avg_latency_us": 6564.91018668928, 00:38:23.876 "min_latency_us": 2356.175238095238, 00:38:23.876 "max_latency_us": 10048.853333333333 00:38:23.876 } 00:38:23.876 ], 00:38:23.876 "core_count": 1 00:38:23.876 } 00:38:23.876 11:06:13 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:23.876 11:06:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:24.134 11:06:13 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:24.134 11:06:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:24.134 11:06:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.134 11:06:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.134 11:06:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.134 11:06:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:24.392 11:06:13 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:24.392 11:06:13 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:24.392 11:06:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:24.392 11:06:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.392 11:06:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.392 11:06:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:24.392 11:06:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.392 11:06:14 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:24.392 11:06:14 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:24.392 11:06:14 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:24.392 11:06:14 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:24.392 11:06:14 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:24.392 11:06:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:24.392 11:06:14 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:24.392 11:06:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:24.392 11:06:14 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:24.392 11:06:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:24.651 [2024-11-19 11:06:14.352280] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:24.651 [2024-11-19 11:06:14.352479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1373d00 (107): Transport endpoint is not connected 00:38:24.651 [2024-11-19 11:06:14.353474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1373d00 (9): Bad file descriptor 00:38:24.651 [2024-11-19 11:06:14.354476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:24.651 [2024-11-19 11:06:14.354485] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:24.651 [2024-11-19 11:06:14.354492] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:24.651 [2024-11-19 11:06:14.354500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:38:24.651 request: 00:38:24.651 { 00:38:24.651 "name": "nvme0", 00:38:24.651 "trtype": "tcp", 00:38:24.651 "traddr": "127.0.0.1", 00:38:24.651 "adrfam": "ipv4", 00:38:24.651 "trsvcid": "4420", 00:38:24.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.651 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:24.651 "prchk_reftag": false, 00:38:24.651 "prchk_guard": false, 00:38:24.651 "hdgst": false, 00:38:24.651 "ddgst": false, 00:38:24.651 "psk": "key1", 00:38:24.651 "allow_unrecognized_csi": false, 00:38:24.651 "method": "bdev_nvme_attach_controller", 00:38:24.651 "req_id": 1 00:38:24.651 } 00:38:24.651 Got JSON-RPC error response 00:38:24.651 response: 00:38:24.651 { 00:38:24.651 "code": -5, 00:38:24.651 "message": "Input/output error" 00:38:24.651 } 00:38:24.651 11:06:14 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:24.651 11:06:14 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:24.651 11:06:14 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:24.651 11:06:14 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:24.651 11:06:14 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:24.651 11:06:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:24.651 11:06:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.651 11:06:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:38:24.651 11:06:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.651 11:06:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.910 11:06:14 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:24.910 11:06:14 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:24.910 11:06:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:24.910 11:06:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.910 11:06:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.910 11:06:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.910 11:06:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:25.168 11:06:14 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:25.168 11:06:14 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:25.168 11:06:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:25.426 11:06:14 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:25.426 11:06:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:25.426 11:06:15 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:25.426 11:06:15 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:25.426 11:06:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:25.684 11:06:15 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:25.684 11:06:15 
keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.mlMAHlCkMw 00:38:25.684 11:06:15 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.mlMAHlCkMw 00:38:25.684 11:06:15 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:25.684 11:06:15 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.mlMAHlCkMw 00:38:25.684 11:06:15 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:25.684 11:06:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:25.684 11:06:15 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:25.684 11:06:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:25.684 11:06:15 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mlMAHlCkMw 00:38:25.684 11:06:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mlMAHlCkMw 00:38:25.943 [2024-11-19 11:06:15.511552] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.mlMAHlCkMw': 0100660 00:38:25.943 [2024-11-19 11:06:15.511575] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:25.943 request: 00:38:25.943 { 00:38:25.943 "name": "key0", 00:38:25.943 "path": "/tmp/tmp.mlMAHlCkMw", 00:38:25.943 "method": "keyring_file_add_key", 00:38:25.943 "req_id": 1 00:38:25.943 } 00:38:25.943 Got JSON-RPC error response 00:38:25.943 response: 00:38:25.943 { 00:38:25.943 "code": -1, 00:38:25.943 "message": "Operation not permitted" 00:38:25.943 } 00:38:25.943 11:06:15 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:25.943 11:06:15 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:25.943 11:06:15 keyring_file -- 
common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:25.943 11:06:15 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:25.943 11:06:15 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.mlMAHlCkMw 00:38:25.943 11:06:15 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mlMAHlCkMw 00:38:25.943 11:06:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mlMAHlCkMw 00:38:25.943 11:06:15 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.mlMAHlCkMw 00:38:25.943 11:06:15 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:25.943 11:06:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:25.943 11:06:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:25.943 11:06:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:25.943 11:06:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:25.943 11:06:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:26.201 11:06:15 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:26.201 11:06:15 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:26.201 11:06:15 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:26.201 11:06:15 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:26.201 11:06:15 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:26.201 11:06:15 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:26.201 11:06:15 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:26.201 11:06:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:26.201 11:06:15 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:26.202 11:06:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:26.459 [2024-11-19 11:06:16.089074] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.mlMAHlCkMw': No such file or directory 00:38:26.459 [2024-11-19 11:06:16.089091] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:26.459 [2024-11-19 11:06:16.089105] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:26.459 [2024-11-19 11:06:16.089112] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:26.459 [2024-11-19 11:06:16.089135] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:26.459 [2024-11-19 11:06:16.089141] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:26.459 request: 00:38:26.459 { 00:38:26.459 "name": "nvme0", 00:38:26.459 "trtype": "tcp", 00:38:26.459 "traddr": "127.0.0.1", 00:38:26.459 "adrfam": "ipv4", 00:38:26.459 "trsvcid": "4420", 00:38:26.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:26.459 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:38:26.459 "prchk_reftag": false, 00:38:26.459 "prchk_guard": false, 00:38:26.459 "hdgst": false, 00:38:26.459 "ddgst": false, 00:38:26.459 "psk": "key0", 00:38:26.459 "allow_unrecognized_csi": false, 00:38:26.459 "method": "bdev_nvme_attach_controller", 00:38:26.459 "req_id": 1 00:38:26.459 } 00:38:26.459 Got JSON-RPC error response 00:38:26.459 response: 00:38:26.459 { 00:38:26.459 "code": -19, 00:38:26.459 "message": "No such device" 00:38:26.459 } 00:38:26.459 11:06:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:26.459 11:06:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:26.460 11:06:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:26.460 11:06:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:26.460 11:06:16 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:26.460 11:06:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:26.717 11:06:16 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:26.717 11:06:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:26.717 11:06:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:26.717 11:06:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:26.717 11:06:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:26.717 11:06:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:26.717 11:06:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TnhXn3XaYo 00:38:26.717 11:06:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:26.717 11:06:16 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:26.717 11:06:16 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:38:26.717 11:06:16 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:26.717 11:06:16 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:26.717 11:06:16 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:26.717 11:06:16 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:26.717 11:06:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TnhXn3XaYo 00:38:26.717 11:06:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TnhXn3XaYo 00:38:26.717 11:06:16 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.TnhXn3XaYo 00:38:26.717 11:06:16 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TnhXn3XaYo 00:38:26.717 11:06:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TnhXn3XaYo 00:38:26.975 11:06:16 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:26.975 11:06:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:27.233 nvme0n1 00:38:27.233 11:06:16 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:27.233 11:06:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:27.233 11:06:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:27.233 11:06:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:27.233 11:06:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:27.233 11:06:16 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:27.491 11:06:17 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:27.491 11:06:17 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:27.491 11:06:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:27.491 11:06:17 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:27.491 11:06:17 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:27.491 11:06:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:27.491 11:06:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:27.491 11:06:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:27.749 11:06:17 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:27.749 11:06:17 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:27.749 11:06:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:27.749 11:06:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:27.749 11:06:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:27.749 11:06:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:27.749 11:06:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:28.008 11:06:17 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:28.008 11:06:17 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:28.008 11:06:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:38:28.008 11:06:17 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:28.008 11:06:17 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:28.008 11:06:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:28.266 11:06:18 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:28.266 11:06:18 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TnhXn3XaYo 00:38:28.266 11:06:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TnhXn3XaYo 00:38:28.525 11:06:18 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eSmu3IZwqR 00:38:28.525 11:06:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eSmu3IZwqR 00:38:28.783 11:06:18 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:28.783 11:06:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:29.041 nvme0n1 00:38:29.041 11:06:18 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:29.041 11:06:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:29.299 11:06:18 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:29.299 "subsystems": [ 00:38:29.299 { 00:38:29.299 "subsystem": 
"keyring", 00:38:29.299 "config": [ 00:38:29.299 { 00:38:29.299 "method": "keyring_file_add_key", 00:38:29.299 "params": { 00:38:29.299 "name": "key0", 00:38:29.299 "path": "/tmp/tmp.TnhXn3XaYo" 00:38:29.299 } 00:38:29.299 }, 00:38:29.299 { 00:38:29.299 "method": "keyring_file_add_key", 00:38:29.299 "params": { 00:38:29.299 "name": "key1", 00:38:29.299 "path": "/tmp/tmp.eSmu3IZwqR" 00:38:29.299 } 00:38:29.299 } 00:38:29.299 ] 00:38:29.299 }, 00:38:29.299 { 00:38:29.299 "subsystem": "iobuf", 00:38:29.300 "config": [ 00:38:29.300 { 00:38:29.300 "method": "iobuf_set_options", 00:38:29.300 "params": { 00:38:29.300 "small_pool_count": 8192, 00:38:29.300 "large_pool_count": 1024, 00:38:29.300 "small_bufsize": 8192, 00:38:29.300 "large_bufsize": 135168, 00:38:29.300 "enable_numa": false 00:38:29.300 } 00:38:29.300 } 00:38:29.300 ] 00:38:29.300 }, 00:38:29.300 { 00:38:29.300 "subsystem": "sock", 00:38:29.300 "config": [ 00:38:29.300 { 00:38:29.300 "method": "sock_set_default_impl", 00:38:29.300 "params": { 00:38:29.300 "impl_name": "posix" 00:38:29.300 } 00:38:29.300 }, 00:38:29.300 { 00:38:29.300 "method": "sock_impl_set_options", 00:38:29.300 "params": { 00:38:29.300 "impl_name": "ssl", 00:38:29.300 "recv_buf_size": 4096, 00:38:29.300 "send_buf_size": 4096, 00:38:29.300 "enable_recv_pipe": true, 00:38:29.300 "enable_quickack": false, 00:38:29.300 "enable_placement_id": 0, 00:38:29.300 "enable_zerocopy_send_server": true, 00:38:29.300 "enable_zerocopy_send_client": false, 00:38:29.300 "zerocopy_threshold": 0, 00:38:29.300 "tls_version": 0, 00:38:29.300 "enable_ktls": false 00:38:29.300 } 00:38:29.300 }, 00:38:29.300 { 00:38:29.300 "method": "sock_impl_set_options", 00:38:29.300 "params": { 00:38:29.300 "impl_name": "posix", 00:38:29.300 "recv_buf_size": 2097152, 00:38:29.300 "send_buf_size": 2097152, 00:38:29.300 "enable_recv_pipe": true, 00:38:29.300 "enable_quickack": false, 00:38:29.300 "enable_placement_id": 0, 00:38:29.300 "enable_zerocopy_send_server": true, 
00:38:29.300 "enable_zerocopy_send_client": false, 00:38:29.300 "zerocopy_threshold": 0, 00:38:29.300 "tls_version": 0, 00:38:29.300 "enable_ktls": false 00:38:29.300 } 00:38:29.300 } 00:38:29.300 ] 00:38:29.300 }, 00:38:29.300 { 00:38:29.300 "subsystem": "vmd", 00:38:29.300 "config": [] 00:38:29.300 }, 00:38:29.300 { 00:38:29.300 "subsystem": "accel", 00:38:29.300 "config": [ 00:38:29.300 { 00:38:29.300 "method": "accel_set_options", 00:38:29.300 "params": { 00:38:29.300 "small_cache_size": 128, 00:38:29.300 "large_cache_size": 16, 00:38:29.300 "task_count": 2048, 00:38:29.300 "sequence_count": 2048, 00:38:29.300 "buf_count": 2048 00:38:29.300 } 00:38:29.300 } 00:38:29.300 ] 00:38:29.300 }, 00:38:29.300 { 00:38:29.300 "subsystem": "bdev", 00:38:29.300 "config": [ 00:38:29.300 { 00:38:29.300 "method": "bdev_set_options", 00:38:29.300 "params": { 00:38:29.300 "bdev_io_pool_size": 65535, 00:38:29.300 "bdev_io_cache_size": 256, 00:38:29.300 "bdev_auto_examine": true, 00:38:29.300 "iobuf_small_cache_size": 128, 00:38:29.300 "iobuf_large_cache_size": 16 00:38:29.300 } 00:38:29.300 }, 00:38:29.300 { 00:38:29.300 "method": "bdev_raid_set_options", 00:38:29.300 "params": { 00:38:29.300 "process_window_size_kb": 1024, 00:38:29.300 "process_max_bandwidth_mb_sec": 0 00:38:29.300 } 00:38:29.300 }, 00:38:29.300 { 00:38:29.300 "method": "bdev_iscsi_set_options", 00:38:29.300 "params": { 00:38:29.300 "timeout_sec": 30 00:38:29.300 } 00:38:29.300 }, 00:38:29.300 { 00:38:29.300 "method": "bdev_nvme_set_options", 00:38:29.300 "params": { 00:38:29.300 "action_on_timeout": "none", 00:38:29.300 "timeout_us": 0, 00:38:29.300 "timeout_admin_us": 0, 00:38:29.300 "keep_alive_timeout_ms": 10000, 00:38:29.300 "arbitration_burst": 0, 00:38:29.300 "low_priority_weight": 0, 00:38:29.300 "medium_priority_weight": 0, 00:38:29.300 "high_priority_weight": 0, 00:38:29.300 "nvme_adminq_poll_period_us": 10000, 00:38:29.300 "nvme_ioq_poll_period_us": 0, 00:38:29.300 "io_queue_requests": 512, 
00:38:29.300 "delay_cmd_submit": true, 00:38:29.300 "transport_retry_count": 4, 00:38:29.300 "bdev_retry_count": 3, 00:38:29.300 "transport_ack_timeout": 0, 00:38:29.300 "ctrlr_loss_timeout_sec": 0, 00:38:29.300 "reconnect_delay_sec": 0, 00:38:29.300 "fast_io_fail_timeout_sec": 0, 00:38:29.300 "disable_auto_failback": false, 00:38:29.300 "generate_uuids": false, 00:38:29.300 "transport_tos": 0, 00:38:29.300 "nvme_error_stat": false, 00:38:29.300 "rdma_srq_size": 0, 00:38:29.300 "io_path_stat": false, 00:38:29.300 "allow_accel_sequence": false, 00:38:29.300 "rdma_max_cq_size": 0, 00:38:29.300 "rdma_cm_event_timeout_ms": 0, 00:38:29.300 "dhchap_digests": [ 00:38:29.300 "sha256", 00:38:29.300 "sha384", 00:38:29.300 "sha512" 00:38:29.300 ], 00:38:29.300 "dhchap_dhgroups": [ 00:38:29.300 "null", 00:38:29.300 "ffdhe2048", 00:38:29.300 "ffdhe3072", 00:38:29.300 "ffdhe4096", 00:38:29.300 "ffdhe6144", 00:38:29.300 "ffdhe8192" 00:38:29.300 ] 00:38:29.300 } 00:38:29.300 }, 00:38:29.300 { 00:38:29.300 "method": "bdev_nvme_attach_controller", 00:38:29.300 "params": { 00:38:29.300 "name": "nvme0", 00:38:29.300 "trtype": "TCP", 00:38:29.300 "adrfam": "IPv4", 00:38:29.300 "traddr": "127.0.0.1", 00:38:29.300 "trsvcid": "4420", 00:38:29.300 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:29.300 "prchk_reftag": false, 00:38:29.300 "prchk_guard": false, 00:38:29.300 "ctrlr_loss_timeout_sec": 0, 00:38:29.300 "reconnect_delay_sec": 0, 00:38:29.300 "fast_io_fail_timeout_sec": 0, 00:38:29.300 "psk": "key0", 00:38:29.300 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:29.300 "hdgst": false, 00:38:29.300 "ddgst": false, 00:38:29.300 "multipath": "multipath" 00:38:29.300 } 00:38:29.300 }, 00:38:29.300 { 00:38:29.300 "method": "bdev_nvme_set_hotplug", 00:38:29.300 "params": { 00:38:29.300 "period_us": 100000, 00:38:29.300 "enable": false 00:38:29.300 } 00:38:29.300 }, 00:38:29.300 { 00:38:29.300 "method": "bdev_wait_for_examine" 00:38:29.300 } 00:38:29.300 ] 00:38:29.300 }, 00:38:29.300 { 
00:38:29.300 "subsystem": "nbd", 00:38:29.300 "config": [] 00:38:29.300 } 00:38:29.300 ] 00:38:29.300 }' 00:38:29.300 11:06:18 keyring_file -- keyring/file.sh@115 -- # killprocess 18463 00:38:29.300 11:06:18 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 18463 ']' 00:38:29.300 11:06:18 keyring_file -- common/autotest_common.sh@958 -- # kill -0 18463 00:38:29.300 11:06:18 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:29.300 11:06:18 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:29.300 11:06:18 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 18463 00:38:29.300 11:06:18 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:29.300 11:06:18 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:29.300 11:06:18 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 18463' 00:38:29.300 killing process with pid 18463 00:38:29.300 11:06:18 keyring_file -- common/autotest_common.sh@973 -- # kill 18463 00:38:29.300 Received shutdown signal, test time was about 1.000000 seconds 00:38:29.300 00:38:29.300 Latency(us) 00:38:29.300 [2024-11-19T10:06:19.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:29.300 [2024-11-19T10:06:19.092Z] =================================================================================================================== 00:38:29.300 [2024-11-19T10:06:19.092Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:29.300 11:06:18 keyring_file -- common/autotest_common.sh@978 -- # wait 18463 00:38:29.558 11:06:19 keyring_file -- keyring/file.sh@118 -- # bperfpid=19983 00:38:29.558 11:06:19 keyring_file -- keyring/file.sh@120 -- # waitforlisten 19983 /var/tmp/bperf.sock 00:38:29.558 11:06:19 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 19983 ']' 00:38:29.558 11:06:19 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:38:29.558 11:06:19 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:29.558 11:06:19 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:29.558 11:06:19 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:29.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:29.558 11:06:19 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:29.558 "subsystems": [ 00:38:29.558 { 00:38:29.558 "subsystem": "keyring", 00:38:29.558 "config": [ 00:38:29.558 { 00:38:29.558 "method": "keyring_file_add_key", 00:38:29.558 "params": { 00:38:29.558 "name": "key0", 00:38:29.558 "path": "/tmp/tmp.TnhXn3XaYo" 00:38:29.558 } 00:38:29.558 }, 00:38:29.558 { 00:38:29.558 "method": "keyring_file_add_key", 00:38:29.558 "params": { 00:38:29.558 "name": "key1", 00:38:29.558 "path": "/tmp/tmp.eSmu3IZwqR" 00:38:29.558 } 00:38:29.558 } 00:38:29.558 ] 00:38:29.558 }, 00:38:29.558 { 00:38:29.558 "subsystem": "iobuf", 00:38:29.558 "config": [ 00:38:29.558 { 00:38:29.558 "method": "iobuf_set_options", 00:38:29.558 "params": { 00:38:29.558 "small_pool_count": 8192, 00:38:29.558 "large_pool_count": 1024, 00:38:29.558 "small_bufsize": 8192, 00:38:29.558 "large_bufsize": 135168, 00:38:29.558 "enable_numa": false 00:38:29.558 } 00:38:29.558 } 00:38:29.558 ] 00:38:29.558 }, 00:38:29.558 { 00:38:29.558 "subsystem": "sock", 00:38:29.558 "config": [ 00:38:29.558 { 00:38:29.558 "method": "sock_set_default_impl", 00:38:29.559 "params": { 00:38:29.559 "impl_name": "posix" 00:38:29.559 } 00:38:29.559 }, 00:38:29.559 { 00:38:29.559 "method": "sock_impl_set_options", 00:38:29.559 "params": { 00:38:29.559 "impl_name": "ssl", 00:38:29.559 "recv_buf_size": 4096, 00:38:29.559 
"send_buf_size": 4096, 00:38:29.559 "enable_recv_pipe": true, 00:38:29.559 "enable_quickack": false, 00:38:29.559 "enable_placement_id": 0, 00:38:29.559 "enable_zerocopy_send_server": true, 00:38:29.559 "enable_zerocopy_send_client": false, 00:38:29.559 "zerocopy_threshold": 0, 00:38:29.559 "tls_version": 0, 00:38:29.559 "enable_ktls": false 00:38:29.559 } 00:38:29.559 }, 00:38:29.559 { 00:38:29.559 "method": "sock_impl_set_options", 00:38:29.559 "params": { 00:38:29.559 "impl_name": "posix", 00:38:29.559 "recv_buf_size": 2097152, 00:38:29.559 "send_buf_size": 2097152, 00:38:29.559 "enable_recv_pipe": true, 00:38:29.559 "enable_quickack": false, 00:38:29.559 "enable_placement_id": 0, 00:38:29.559 "enable_zerocopy_send_server": true, 00:38:29.559 "enable_zerocopy_send_client": false, 00:38:29.559 "zerocopy_threshold": 0, 00:38:29.559 "tls_version": 0, 00:38:29.559 "enable_ktls": false 00:38:29.559 } 00:38:29.559 } 00:38:29.559 ] 00:38:29.559 }, 00:38:29.559 { 00:38:29.559 "subsystem": "vmd", 00:38:29.559 "config": [] 00:38:29.559 }, 00:38:29.559 { 00:38:29.559 "subsystem": "accel", 00:38:29.559 "config": [ 00:38:29.559 { 00:38:29.559 "method": "accel_set_options", 00:38:29.559 "params": { 00:38:29.559 "small_cache_size": 128, 00:38:29.559 "large_cache_size": 16, 00:38:29.559 "task_count": 2048, 00:38:29.559 "sequence_count": 2048, 00:38:29.559 "buf_count": 2048 00:38:29.559 } 00:38:29.559 } 00:38:29.559 ] 00:38:29.559 }, 00:38:29.559 { 00:38:29.559 "subsystem": "bdev", 00:38:29.559 "config": [ 00:38:29.559 { 00:38:29.559 "method": "bdev_set_options", 00:38:29.559 "params": { 00:38:29.559 "bdev_io_pool_size": 65535, 00:38:29.559 "bdev_io_cache_size": 256, 00:38:29.559 "bdev_auto_examine": true, 00:38:29.559 "iobuf_small_cache_size": 128, 00:38:29.559 "iobuf_large_cache_size": 16 00:38:29.559 } 00:38:29.559 }, 00:38:29.559 { 00:38:29.559 "method": "bdev_raid_set_options", 00:38:29.559 "params": { 00:38:29.559 "process_window_size_kb": 1024, 00:38:29.559 
"process_max_bandwidth_mb_sec": 0 00:38:29.559 } 00:38:29.559 }, 00:38:29.559 { 00:38:29.559 "method": "bdev_iscsi_set_options", 00:38:29.559 "params": { 00:38:29.559 "timeout_sec": 30 00:38:29.559 } 00:38:29.559 }, 00:38:29.559 { 00:38:29.559 "method": "bdev_nvme_set_options", 00:38:29.559 "params": { 00:38:29.559 "action_on_timeout": "none", 00:38:29.559 "timeout_us": 0, 00:38:29.559 "timeout_admin_us": 0, 00:38:29.559 "keep_alive_timeout_ms": 10000, 00:38:29.559 "arbitration_burst": 0, 00:38:29.559 "low_priority_weight": 0, 00:38:29.559 "medium_priority_weight": 0, 00:38:29.559 "high_priority_weight": 0, 00:38:29.559 "nvme_adminq_poll_period_us": 10000, 00:38:29.559 "nvme_ioq_poll_period_us": 0, 00:38:29.559 "io_queue_requests": 512, 00:38:29.559 "delay_cmd_submit": true, 00:38:29.559 "transport_retry_count": 4, 00:38:29.559 "bdev_retry_count": 3, 00:38:29.559 "transport_ack_timeout": 0, 00:38:29.559 "ctrlr_loss_timeout_sec": 0, 00:38:29.559 "reconnect_delay_sec": 0, 00:38:29.559 "fast_io_fail_timeout_sec": 0, 00:38:29.559 "disable_auto_failback": false, 00:38:29.559 "generate_uuids": false, 00:38:29.559 "transport_tos": 0, 00:38:29.559 "nvme_error_stat": false, 00:38:29.559 "rdma_srq_size": 0, 00:38:29.559 "io_path_stat": false, 00:38:29.559 "allow_accel_sequence": false, 00:38:29.559 "rdma_max_cq_size": 0, 00:38:29.559 "rdma_cm_event_timeout_ms": 0, 00:38:29.559 "dhchap_digests": [ 00:38:29.559 "sha256", 00:38:29.559 "sha384", 00:38:29.559 "sha512" 00:38:29.559 ], 00:38:29.559 "dhchap_dhgroups": [ 00:38:29.559 "null", 00:38:29.559 "ffdhe2048", 00:38:29.559 "ffdhe3072", 00:38:29.559 "ffdhe4096", 00:38:29.559 "ffdhe6144", 00:38:29.559 "ffdhe8192" 00:38:29.559 ] 00:38:29.559 } 00:38:29.559 }, 00:38:29.559 { 00:38:29.559 "method": "bdev_nvme_attach_controller", 00:38:29.559 "params": { 00:38:29.560 "name": "nvme0", 00:38:29.560 "trtype": "TCP", 00:38:29.560 "adrfam": "IPv4", 00:38:29.560 "traddr": "127.0.0.1", 00:38:29.560 "trsvcid": "4420", 00:38:29.560 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:38:29.560 "prchk_reftag": false, 00:38:29.560 "prchk_guard": false, 00:38:29.560 "ctrlr_loss_timeout_sec": 0, 00:38:29.560 "reconnect_delay_sec": 0, 00:38:29.560 "fast_io_fail_timeout_sec": 0, 00:38:29.560 "psk": "key0", 00:38:29.560 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:29.560 "hdgst": false, 00:38:29.560 "ddgst": false, 00:38:29.560 "multipath": "multipath" 00:38:29.560 } 00:38:29.560 }, 00:38:29.560 { 00:38:29.560 "method": "bdev_nvme_set_hotplug", 00:38:29.560 "params": { 00:38:29.560 "period_us": 100000, 00:38:29.560 "enable": false 00:38:29.560 } 00:38:29.560 }, 00:38:29.560 { 00:38:29.560 "method": "bdev_wait_for_examine" 00:38:29.560 } 00:38:29.560 ] 00:38:29.560 }, 00:38:29.560 { 00:38:29.560 "subsystem": "nbd", 00:38:29.560 "config": [] 00:38:29.560 } 00:38:29.560 ] 00:38:29.560 }' 00:38:29.560 11:06:19 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:29.560 11:06:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:29.560 [2024-11-19 11:06:19.154560] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:38:29.560 [2024-11-19 11:06:19.154613] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid19983 ] 00:38:29.560 [2024-11-19 11:06:19.229906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:29.560 [2024-11-19 11:06:19.267515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:29.818 [2024-11-19 11:06:19.427214] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:30.383 11:06:19 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:30.383 11:06:19 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:30.383 11:06:19 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:30.383 11:06:19 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:30.383 11:06:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:30.641 11:06:20 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:30.641 11:06:20 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:30.641 11:06:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:30.641 11:06:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:30.641 11:06:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:30.641 11:06:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:30.641 11:06:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:30.641 11:06:20 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:30.641 11:06:20 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:30.641 11:06:20 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:30.641 11:06:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:30.641 11:06:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:30.641 11:06:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:30.641 11:06:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:30.899 11:06:20 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:30.899 11:06:20 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:30.899 11:06:20 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:30.899 11:06:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:31.207 11:06:20 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:31.207 11:06:20 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:31.207 11:06:20 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.TnhXn3XaYo /tmp/tmp.eSmu3IZwqR 00:38:31.207 11:06:20 keyring_file -- keyring/file.sh@20 -- # killprocess 19983 00:38:31.207 11:06:20 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 19983 ']' 00:38:31.207 11:06:20 keyring_file -- common/autotest_common.sh@958 -- # kill -0 19983 00:38:31.207 11:06:20 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:31.207 11:06:20 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:31.207 11:06:20 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 19983 00:38:31.207 11:06:20 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:31.207 11:06:20 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:31.207 11:06:20 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 19983' 00:38:31.207 killing process with pid 19983 00:38:31.207 11:06:20 keyring_file -- common/autotest_common.sh@973 -- # kill 19983 00:38:31.207 Received shutdown signal, test time was about 1.000000 seconds 00:38:31.207 00:38:31.208 Latency(us) 00:38:31.208 [2024-11-19T10:06:21.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:31.208 [2024-11-19T10:06:21.000Z] =================================================================================================================== 00:38:31.208 [2024-11-19T10:06:21.000Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:31.208 11:06:20 keyring_file -- common/autotest_common.sh@978 -- # wait 19983 00:38:31.489 11:06:21 keyring_file -- keyring/file.sh@21 -- # killprocess 18422 00:38:31.489 11:06:21 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 18422 ']' 00:38:31.489 11:06:21 keyring_file -- common/autotest_common.sh@958 -- # kill -0 18422 00:38:31.489 11:06:21 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:31.489 11:06:21 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:31.489 11:06:21 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 18422 00:38:31.489 11:06:21 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:31.489 11:06:21 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:31.489 11:06:21 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 18422' 00:38:31.489 killing process with pid 18422 00:38:31.489 11:06:21 keyring_file -- common/autotest_common.sh@973 -- # kill 18422 00:38:31.489 11:06:21 keyring_file -- common/autotest_common.sh@978 -- # wait 18422 00:38:31.767 00:38:31.767 real 0m11.757s 00:38:31.767 user 0m29.094s 00:38:31.767 sys 0m2.799s 00:38:31.767 11:06:21 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:31.767 11:06:21 keyring_file -- 
common/autotest_common.sh@10 -- # set +x 00:38:31.767 ************************************ 00:38:31.767 END TEST keyring_file 00:38:31.767 ************************************ 00:38:31.767 11:06:21 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:38:31.767 11:06:21 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:31.767 11:06:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:31.767 11:06:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:31.767 11:06:21 -- common/autotest_common.sh@10 -- # set +x 00:38:31.767 ************************************ 00:38:31.767 START TEST keyring_linux 00:38:31.767 ************************************ 00:38:31.767 11:06:21 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:31.767 Joined session keyring: 1007707912 00:38:31.767 * Looking for test storage... 
00:38:31.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:31.767 11:06:21 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:31.767 11:06:21 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:38:31.767 11:06:21 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:32.030 11:06:21 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:32.030 11:06:21 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:32.030 11:06:21 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:32.030 11:06:21 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:32.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.030 --rc genhtml_branch_coverage=1 00:38:32.030 --rc genhtml_function_coverage=1 00:38:32.030 --rc genhtml_legend=1 00:38:32.030 --rc geninfo_all_blocks=1 00:38:32.030 --rc geninfo_unexecuted_blocks=1 00:38:32.030 00:38:32.030 ' 00:38:32.030 11:06:21 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:32.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.030 --rc genhtml_branch_coverage=1 00:38:32.030 --rc genhtml_function_coverage=1 00:38:32.030 --rc genhtml_legend=1 00:38:32.030 --rc geninfo_all_blocks=1 00:38:32.030 --rc geninfo_unexecuted_blocks=1 00:38:32.030 00:38:32.030 ' 
00:38:32.030 11:06:21 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:32.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.030 --rc genhtml_branch_coverage=1 00:38:32.030 --rc genhtml_function_coverage=1 00:38:32.030 --rc genhtml_legend=1 00:38:32.030 --rc geninfo_all_blocks=1 00:38:32.030 --rc geninfo_unexecuted_blocks=1 00:38:32.030 00:38:32.030 ' 00:38:32.031 11:06:21 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:32.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.031 --rc genhtml_branch_coverage=1 00:38:32.031 --rc genhtml_function_coverage=1 00:38:32.031 --rc genhtml_legend=1 00:38:32.031 --rc geninfo_all_blocks=1 00:38:32.031 --rc geninfo_unexecuted_blocks=1 00:38:32.031 00:38:32.031 ' 00:38:32.031 11:06:21 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:32.031 11:06:21 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:32.031 11:06:21 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:32.031 11:06:21 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:32.031 11:06:21 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:32.031 11:06:21 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:32.031 11:06:21 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:32.031 11:06:21 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:32.031 11:06:21 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:32.031 11:06:21 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:32.031 11:06:21 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:32.032 11:06:21 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:32.032 11:06:21 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:38:32.032 11:06:21 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:38:32.032 11:06:21 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:38:32.032 11:06:21 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:32.032 11:06:21 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:32.032 11:06:21 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:32.032 11:06:21 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:32.032 11:06:21 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:32.032 11:06:21 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:32.032 11:06:21 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:32.032 11:06:21 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:32.032 11:06:21 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:32.032 11:06:21 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.033 11:06:21 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.033 11:06:21 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.033 11:06:21 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:32.033 11:06:21 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.033 11:06:21 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:32.033 11:06:21 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:32.033 11:06:21 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:32.033 11:06:21 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:32.033 11:06:21 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:32.033 11:06:21 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:32.033 11:06:21 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:38:32.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:32.033 11:06:21 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:32.033 11:06:21 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:32.033 11:06:21 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:32.033 11:06:21 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:32.033 11:06:21 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:32.033 11:06:21 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:32.033 11:06:21 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:32.033 11:06:21 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:32.033 11:06:21 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:32.033 11:06:21 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:32.033 11:06:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:32.033 11:06:21 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:32.033 11:06:21 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:32.033 11:06:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:32.034 11:06:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:32.034 11:06:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:32.034 11:06:21 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:32.034 11:06:21 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:32.034 11:06:21 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:32.034 11:06:21 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:38:32.034 11:06:21 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:32.034 11:06:21 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:32.034 11:06:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:32.034 11:06:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:32.034 /tmp/:spdk-test:key0 00:38:32.034 11:06:21 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:32.034 11:06:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:32.034 11:06:21 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:32.034 11:06:21 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:32.034 11:06:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:32.034 11:06:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:32.034 11:06:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:32.034 11:06:21 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:32.034 11:06:21 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:32.034 11:06:21 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:32.034 11:06:21 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:32.034 11:06:21 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:32.034 11:06:21 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:32.034 11:06:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:32.034 11:06:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:32.034 /tmp/:spdk-test:key1 00:38:32.034 11:06:21 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=20547 00:38:32.034 11:06:21 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 20547 00:38:32.034 11:06:21 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:32.034 11:06:21 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 20547 ']' 00:38:32.034 11:06:21 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:32.034 11:06:21 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:32.034 11:06:21 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:32.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:32.036 11:06:21 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:32.036 11:06:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:32.036 [2024-11-19 11:06:21.797300] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:38:32.036 [2024-11-19 11:06:21.797352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid20547 ] 00:38:32.294 [2024-11-19 11:06:21.870189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.294 [2024-11-19 11:06:21.911835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:32.553 11:06:22 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:32.553 11:06:22 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:32.553 11:06:22 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:32.553 11:06:22 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.553 11:06:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:32.553 [2024-11-19 11:06:22.134250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:32.553 null0 00:38:32.554 [2024-11-19 11:06:22.166304] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:32.554 [2024-11-19 11:06:22.166676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:32.554 11:06:22 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.554 11:06:22 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:32.554 824150399 00:38:32.554 11:06:22 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:32.554 763058704 00:38:32.554 11:06:22 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=20552 00:38:32.554 11:06:22 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:32.554 11:06:22 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 20552 /var/tmp/bperf.sock 00:38:32.554 11:06:22 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 20552 ']' 00:38:32.554 11:06:22 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:32.554 11:06:22 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:32.554 11:06:22 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:32.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:32.554 11:06:22 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:32.554 11:06:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:32.554 [2024-11-19 11:06:22.238884] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:38:32.554 [2024-11-19 11:06:22.238928] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid20552 ] 00:38:32.554 [2024-11-19 11:06:22.314473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.812 [2024-11-19 11:06:22.357013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:32.812 11:06:22 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:32.812 11:06:22 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:32.812 11:06:22 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:32.812 11:06:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:32.812 11:06:22 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:32.812 11:06:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:33.070 11:06:22 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:33.070 11:06:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:33.329 [2024-11-19 11:06:23.004023] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:33.329 nvme0n1 00:38:33.329 11:06:23 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:38:33.329 11:06:23 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:33.329 11:06:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:33.329 11:06:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:33.329 11:06:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:33.329 11:06:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:33.587 11:06:23 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:33.587 11:06:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:33.587 11:06:23 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:33.587 11:06:23 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:33.587 11:06:23 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:33.587 11:06:23 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:33.587 11:06:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:33.845 11:06:23 keyring_linux -- keyring/linux.sh@25 -- # sn=824150399 00:38:33.845 11:06:23 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:33.845 11:06:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:33.845 11:06:23 keyring_linux -- keyring/linux.sh@26 -- # [[ 824150399 == \8\2\4\1\5\0\3\9\9 ]] 00:38:33.845 11:06:23 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 824150399 00:38:33.845 11:06:23 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:33.845 11:06:23 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:33.845 Running I/O for 1 seconds... 00:38:35.219 21720.00 IOPS, 84.84 MiB/s 00:38:35.219 Latency(us) 00:38:35.219 [2024-11-19T10:06:25.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:35.219 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:35.219 nvme0n1 : 1.01 21720.80 84.85 0.00 0.00 5874.19 4618.73 13793.77 00:38:35.219 [2024-11-19T10:06:25.011Z] =================================================================================================================== 00:38:35.219 [2024-11-19T10:06:25.011Z] Total : 21720.80 84.85 0.00 0.00 5874.19 4618.73 13793.77 00:38:35.219 { 00:38:35.219 "results": [ 00:38:35.219 { 00:38:35.219 "job": "nvme0n1", 00:38:35.219 "core_mask": "0x2", 00:38:35.219 "workload": "randread", 00:38:35.219 "status": "finished", 00:38:35.219 "queue_depth": 128, 00:38:35.219 "io_size": 4096, 00:38:35.219 "runtime": 1.005902, 00:38:35.219 "iops": 21720.803815878684, 00:38:35.219 "mibps": 84.84688990577611, 00:38:35.219 "io_failed": 0, 00:38:35.219 "io_timeout": 0, 00:38:35.219 "avg_latency_us": 5874.1933218693675, 00:38:35.219 "min_latency_us": 4618.727619047619, 00:38:35.219 "max_latency_us": 13793.76761904762 00:38:35.219 } 00:38:35.219 ], 00:38:35.219 "core_count": 1 00:38:35.219 } 00:38:35.219 11:06:24 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:35.219 11:06:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:35.219 11:06:24 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:35.219 11:06:24 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:35.219 11:06:24 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:35.219 11:06:24 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:35.219 11:06:24 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:35.219 11:06:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:35.477 11:06:25 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:35.477 11:06:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:35.477 11:06:25 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:35.478 11:06:25 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:35.478 11:06:25 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:38:35.478 11:06:25 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:35.478 11:06:25 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:35.478 11:06:25 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:35.478 11:06:25 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:35.478 11:06:25 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:35.478 11:06:25 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:35.478 11:06:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:35.478 [2024-11-19 11:06:25.215037] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:35.478 [2024-11-19 11:06:25.215627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254ba70 (107): Transport endpoint is not connected 00:38:35.478 [2024-11-19 11:06:25.216622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254ba70 (9): Bad file descriptor 00:38:35.478 [2024-11-19 11:06:25.217623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:35.478 [2024-11-19 11:06:25.217638] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:35.478 [2024-11-19 11:06:25.217645] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:35.478 [2024-11-19 11:06:25.217653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:35.478 request: 00:38:35.478 { 00:38:35.478 "name": "nvme0", 00:38:35.478 "trtype": "tcp", 00:38:35.478 "traddr": "127.0.0.1", 00:38:35.478 "adrfam": "ipv4", 00:38:35.478 "trsvcid": "4420", 00:38:35.478 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:35.478 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:35.478 "prchk_reftag": false, 00:38:35.478 "prchk_guard": false, 00:38:35.478 "hdgst": false, 00:38:35.478 "ddgst": false, 00:38:35.478 "psk": ":spdk-test:key1", 00:38:35.478 "allow_unrecognized_csi": false, 00:38:35.478 "method": "bdev_nvme_attach_controller", 00:38:35.478 "req_id": 1 00:38:35.478 } 00:38:35.478 Got JSON-RPC error response 00:38:35.478 response: 00:38:35.478 { 00:38:35.478 "code": -5, 00:38:35.478 "message": "Input/output error" 00:38:35.478 } 00:38:35.478 11:06:25 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:38:35.478 11:06:25 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:35.478 11:06:25 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:35.478 11:06:25 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:35.478 11:06:25 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:35.478 11:06:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:35.478 11:06:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:35.478 11:06:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:35.478 11:06:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:35.478 11:06:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:35.478 11:06:25 keyring_linux -- keyring/linux.sh@33 -- # sn=824150399 00:38:35.478 11:06:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 824150399 00:38:35.478 1 links removed 00:38:35.478 11:06:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:35.478 11:06:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:35.478 
11:06:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:35.478 11:06:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:35.478 11:06:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:35.478 11:06:25 keyring_linux -- keyring/linux.sh@33 -- # sn=763058704 00:38:35.478 11:06:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 763058704 00:38:35.478 1 links removed 00:38:35.478 11:06:25 keyring_linux -- keyring/linux.sh@41 -- # killprocess 20552 00:38:35.478 11:06:25 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 20552 ']' 00:38:35.478 11:06:25 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 20552 00:38:35.478 11:06:25 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:35.478 11:06:25 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:35.478 11:06:25 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 20552 00:38:35.736 11:06:25 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:35.736 11:06:25 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:35.736 11:06:25 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 20552' 00:38:35.736 killing process with pid 20552 00:38:35.736 11:06:25 keyring_linux -- common/autotest_common.sh@973 -- # kill 20552 00:38:35.736 Received shutdown signal, test time was about 1.000000 seconds 00:38:35.736 00:38:35.736 Latency(us) 00:38:35.736 [2024-11-19T10:06:25.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:35.736 [2024-11-19T10:06:25.528Z] =================================================================================================================== 00:38:35.736 [2024-11-19T10:06:25.528Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:35.736 11:06:25 keyring_linux -- common/autotest_common.sh@978 -- # wait 20552 00:38:35.736 
11:06:25 keyring_linux -- keyring/linux.sh@42 -- # killprocess 20547 00:38:35.736 11:06:25 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 20547 ']' 00:38:35.736 11:06:25 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 20547 00:38:35.736 11:06:25 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:35.736 11:06:25 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:35.736 11:06:25 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 20547 00:38:35.736 11:06:25 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:35.736 11:06:25 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:35.736 11:06:25 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 20547' 00:38:35.736 killing process with pid 20547 00:38:35.736 11:06:25 keyring_linux -- common/autotest_common.sh@973 -- # kill 20547 00:38:35.736 11:06:25 keyring_linux -- common/autotest_common.sh@978 -- # wait 20547 00:38:36.304 00:38:36.304 real 0m4.378s 00:38:36.304 user 0m8.175s 00:38:36.304 sys 0m1.509s 00:38:36.304 11:06:25 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:36.304 11:06:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:36.304 ************************************ 00:38:36.304 END TEST keyring_linux 00:38:36.304 ************************************ 00:38:36.304 11:06:25 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:36.304 11:06:25 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:36.304 11:06:25 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:36.304 11:06:25 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:36.304 11:06:25 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:36.304 11:06:25 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:36.304 11:06:25 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:36.304 11:06:25 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:36.304 
11:06:25 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:36.304 11:06:25 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:36.304 11:06:25 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:36.304 11:06:25 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:36.304 11:06:25 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:36.304 11:06:25 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:36.304 11:06:25 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:36.304 11:06:25 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:36.304 11:06:25 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:36.304 11:06:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:36.304 11:06:25 -- common/autotest_common.sh@10 -- # set +x 00:38:36.304 11:06:25 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:36.304 11:06:25 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:36.304 11:06:25 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:36.304 11:06:25 -- common/autotest_common.sh@10 -- # set +x 00:38:41.573 INFO: APP EXITING 00:38:41.573 INFO: killing all VMs 00:38:41.573 INFO: killing vhost app 00:38:41.573 INFO: EXIT DONE 00:38:44.109 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:38:44.109 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:38:44.109 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:38:44.109 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:38:44.109 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:38:44.109 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:38:44.109 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:38:44.109 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:38:44.109 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:38:44.109 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:38:44.109 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:38:44.109 0000:80:04.5 (8086 2021): 
Already using the ioatdma driver 00:38:44.109 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:38:44.109 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:38:44.109 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:38:44.368 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:38:44.368 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:38:47.658 Cleaning 00:38:47.658 Removing: /var/run/dpdk/spdk0/config 00:38:47.658 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:47.658 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:47.658 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:47.658 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:47.658 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:47.658 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:47.658 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:47.658 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:47.658 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:47.658 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:47.658 Removing: /var/run/dpdk/spdk1/config 00:38:47.658 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:47.658 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:47.658 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:47.658 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:47.658 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:47.658 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:47.658 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:47.658 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:47.658 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:47.658 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:47.658 Removing: /var/run/dpdk/spdk2/config 00:38:47.658 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:47.658 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:47.658 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:47.658 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:47.658 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:47.658 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:47.658 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:47.658 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:47.658 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:47.658 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:47.658 Removing: /var/run/dpdk/spdk3/config 00:38:47.658 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:47.658 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:47.658 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:47.658 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:47.658 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:47.658 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:47.658 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:47.658 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:47.658 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:47.658 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:47.658 Removing: /var/run/dpdk/spdk4/config 00:38:47.658 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:47.658 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:47.658 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:47.658 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:47.658 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:47.658 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:47.658 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:47.658 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:47.658 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:47.658 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:38:47.658 Removing: /dev/shm/bdev_svc_trace.1 00:38:47.658 Removing: /dev/shm/nvmf_trace.0 00:38:47.658 Removing: /dev/shm/spdk_tgt_trace.pid3733350 00:38:47.658 Removing: /var/run/dpdk/spdk0 00:38:47.658 Removing: /var/run/dpdk/spdk1 00:38:47.658 Removing: /var/run/dpdk/spdk2 00:38:47.658 Removing: /var/run/dpdk/spdk3 00:38:47.658 Removing: /var/run/dpdk/spdk4 00:38:47.658 Removing: /var/run/dpdk/spdk_pid10243 00:38:47.658 Removing: /var/run/dpdk/spdk_pid10701 00:38:47.658 Removing: /var/run/dpdk/spdk_pid13183 00:38:47.658 Removing: /var/run/dpdk/spdk_pid13648 00:38:47.658 Removing: /var/run/dpdk/spdk_pid14116 00:38:47.658 Removing: /var/run/dpdk/spdk_pid18422 00:38:47.658 Removing: /var/run/dpdk/spdk_pid18463 00:38:47.658 Removing: /var/run/dpdk/spdk_pid19983 00:38:47.659 Removing: /var/run/dpdk/spdk_pid20547 00:38:47.659 Removing: /var/run/dpdk/spdk_pid20552 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3730962 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3732096 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3733350 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3733983 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3734930 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3735174 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3736147 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3736243 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3736519 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3738255 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3739531 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3739817 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3740104 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3740414 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3740704 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3740881 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3741044 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3741361 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3742065 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3745161 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3745371 
00:38:47.659 Removing: /var/run/dpdk/spdk_pid3745540 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3745648 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3746036 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3746058 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3746533 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3746670 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3746920 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3747036 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3747292 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3747300 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3747865 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3748114 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3748407 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3752249 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3756739 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3767383 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3767895 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3772356 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3772612 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3776874 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3782768 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3785597 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3795952 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3805321 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3807314 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3808240 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3825107 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3829024 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3874885 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3880154 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3885909 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3892355 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3892398 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3893131 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3894014 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3894925 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3895400 00:38:47.659 Removing: 
/var/run/dpdk/spdk_pid3895568 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3895847 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3895860 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3895865 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3896778 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3897695 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3898699 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3899204 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3899206 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3899606 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3900973 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3902019 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3910141 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3939029 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3943561 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3945170 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3947002 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3947230 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3947258 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3947484 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3947989 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3949779 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3950591 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3951086 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3953216 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3953919 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3954423 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3958697 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3964320 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3964321 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3964322 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3968229 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3976692 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3981219 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3987447 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3988533 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3990086 00:38:47.659 Removing: /var/run/dpdk/spdk_pid3991399 
00:38:47.659 Removing: /var/run/dpdk/spdk_pid3996098 00:38:47.659 Removing: /var/run/dpdk/spdk_pid4000447 00:38:47.659 Removing: /var/run/dpdk/spdk_pid4004536 00:38:47.659 Removing: /var/run/dpdk/spdk_pid4012064 00:38:47.659 Removing: /var/run/dpdk/spdk_pid4012072 00:38:47.659 Removing: /var/run/dpdk/spdk_pid4016788 00:38:47.659 Removing: /var/run/dpdk/spdk_pid4017012 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4017239 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4017506 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4017638 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4022212 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4022785 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4027193 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4030386 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4035666 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4041330 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4049934 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4057142 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4057144 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4076563 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4077145 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4077633 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4078144 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4078848 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4079424 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4080011 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4080488 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4084739 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4084977 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4090928 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4091098 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4096588 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4100825 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4110568 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4111252 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4115297 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4115539 00:38:47.919 Removing: 
/var/run/dpdk/spdk_pid4119901 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4125975 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4128520 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4138668 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4147343 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4148974 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4149867 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4166083 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4170058 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4173227 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4181159 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4181210 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4186250 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4188216 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4190174 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4191223 00:38:47.919 Removing: /var/run/dpdk/spdk_pid4193199 00:38:47.919 Removing: /var/run/dpdk/spdk_pid717 00:38:47.919 Removing: /var/run/dpdk/spdk_pid9781 00:38:47.919 Clean 00:38:48.179 11:06:37 -- common/autotest_common.sh@1453 -- # return 0 00:38:48.179 11:06:37 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:38:48.179 11:06:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:48.179 11:06:37 -- common/autotest_common.sh@10 -- # set +x 00:38:48.179 11:06:37 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:38:48.179 11:06:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:48.179 11:06:37 -- common/autotest_common.sh@10 -- # set +x 00:38:48.179 11:06:37 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:48.179 11:06:37 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:48.179 11:06:37 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:48.179 11:06:37 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:38:48.179 11:06:37 -- spdk/autotest.sh@398 -- # 
hostname 00:38:48.179 11:06:37 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:48.437 geninfo: WARNING: invalid characters removed from testname! 00:39:10.361 11:06:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:11.740 11:07:01 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:13.643 11:07:03 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:15.547 11:07:04 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
--rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:16.924 11:07:06 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:18.827 11:07:08 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:20.729 11:07:10 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:20.729 11:07:10 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:20.729 11:07:10 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:39:20.729 11:07:10 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:20.729 11:07:10 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:20.729 11:07:10 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:20.729 + [[ -n 3653714 ]] 00:39:20.729 + sudo 
kill 3653714 00:39:20.739 [Pipeline] } 00:39:20.755 [Pipeline] // stage 00:39:20.760 [Pipeline] } 00:39:20.775 [Pipeline] // timeout 00:39:20.780 [Pipeline] } 00:39:20.794 [Pipeline] // catchError 00:39:20.799 [Pipeline] } 00:39:20.814 [Pipeline] // wrap 00:39:20.819 [Pipeline] } 00:39:20.832 [Pipeline] // catchError 00:39:20.841 [Pipeline] stage 00:39:20.844 [Pipeline] { (Epilogue) 00:39:20.857 [Pipeline] catchError 00:39:20.859 [Pipeline] { 00:39:20.871 [Pipeline] echo 00:39:20.872 Cleanup processes 00:39:20.878 [Pipeline] sh 00:39:21.161 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:21.161 31239 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:21.173 [Pipeline] sh 00:39:21.454 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:21.454 ++ grep -v 'sudo pgrep' 00:39:21.454 ++ awk '{print $1}' 00:39:21.454 + sudo kill -9 00:39:21.454 + true 00:39:21.467 [Pipeline] sh 00:39:21.752 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:33.991 [Pipeline] sh 00:39:34.281 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:34.281 Artifacts sizes are good 00:39:34.297 [Pipeline] archiveArtifacts 00:39:34.305 Archiving artifacts 00:39:34.475 [Pipeline] sh 00:39:34.811 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:39:34.827 [Pipeline] cleanWs 00:39:34.838 [WS-CLEANUP] Deleting project workspace... 00:39:34.838 [WS-CLEANUP] Deferred wipeout is used... 00:39:34.846 [WS-CLEANUP] done 00:39:34.848 [Pipeline] } 00:39:34.868 [Pipeline] // catchError 00:39:34.884 [Pipeline] sh 00:39:35.167 + logger -p user.info -t JENKINS-CI 00:39:35.176 [Pipeline] } 00:39:35.191 [Pipeline] // stage 00:39:35.197 [Pipeline] } 00:39:35.212 [Pipeline] // node 00:39:35.218 [Pipeline] End of Pipeline 00:39:35.264 Finished: SUCCESS